00:00:00.001 Started by upstream project "autotest-per-patch" build number 132294 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.048 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.050 The recommended git tool is: git 00:00:00.050 using credential 00000000-0000-0000-0000-000000000002 00:00:00.052 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.090 Fetching changes from the remote Git repository 00:00:00.092 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.152 Using shallow fetch with depth 1 00:00:00.152 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.152 > git --version # timeout=10 00:00:00.214 > git --version # 'git version 2.39.2' 00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.252 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.253 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.625 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.637 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.650 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.650 > git config core.sparsecheckout # timeout=10 00:00:05.661 > git read-tree -mu HEAD # timeout=10 00:00:05.678 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.700 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.700 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.830 [Pipeline] Start of Pipeline 00:00:05.844 [Pipeline] library 00:00:05.846 Loading library shm_lib@master 00:00:05.846 Library shm_lib@master is cached. Copying from home. 00:00:05.865 [Pipeline] node 00:00:05.873 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.876 [Pipeline] { 00:00:05.885 [Pipeline] catchError 00:00:05.886 [Pipeline] { 00:00:05.895 [Pipeline] wrap 00:00:05.902 [Pipeline] { 00:00:05.910 [Pipeline] stage 00:00:05.912 [Pipeline] { (Prologue) 00:00:06.119 [Pipeline] sh 00:00:06.401 + logger -p user.info -t JENKINS-CI 00:00:06.417 [Pipeline] echo 00:00:06.418 Node: GP6 00:00:06.425 [Pipeline] sh 00:00:06.719 [Pipeline] setCustomBuildProperty 00:00:06.728 [Pipeline] echo 00:00:06.729 Cleanup processes 00:00:06.733 [Pipeline] sh 00:00:07.016 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.016 2726004 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.028 [Pipeline] sh 00:00:07.311 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.311 ++ awk '{print $1}' 00:00:07.311 ++ grep -v 'sudo pgrep' 00:00:07.311 + sudo kill -9 00:00:07.311 + true 00:00:07.323 [Pipeline] cleanWs 00:00:07.333 [WS-CLEANUP] Deleting project workspace... 00:00:07.333 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.341 [WS-CLEANUP] done 00:00:07.345 [Pipeline] setCustomBuildProperty 00:00:07.362 [Pipeline] sh 00:00:07.645 + sudo git config --global --replace-all safe.directory '*' 00:00:07.753 [Pipeline] httpRequest 00:00:08.219 [Pipeline] echo 00:00:08.220 Sorcerer 10.211.164.101 is alive 00:00:08.227 [Pipeline] retry 00:00:08.228 [Pipeline] { 00:00:08.239 [Pipeline] httpRequest 00:00:08.243 HttpMethod: GET 00:00:08.243 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.244 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.263 Response Code: HTTP/1.1 200 OK 00:00:08.264 Success: Status code 200 is in the accepted range: 200,404 00:00:08.264 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.197 [Pipeline] } 00:00:15.218 [Pipeline] // retry 00:00:15.226 [Pipeline] sh 00:00:15.515 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.533 [Pipeline] httpRequest 00:00:15.948 [Pipeline] echo 00:00:15.950 Sorcerer 10.211.164.101 is alive 00:00:15.959 [Pipeline] retry 00:00:15.962 [Pipeline] { 00:00:15.977 [Pipeline] httpRequest 00:00:15.982 HttpMethod: GET 00:00:15.982 URL: http://10.211.164.101/packages/spdk_8531656d379a9809102b4858f69950decf92a1c5.tar.gz 00:00:15.983 Sending request to url: http://10.211.164.101/packages/spdk_8531656d379a9809102b4858f69950decf92a1c5.tar.gz 00:00:16.005 Response Code: HTTP/1.1 200 OK 00:00:16.006 Success: Status code 200 is in the accepted range: 200,404 00:00:16.006 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_8531656d379a9809102b4858f69950decf92a1c5.tar.gz 00:01:05.762 [Pipeline] } 00:01:05.781 [Pipeline] // retry 00:01:05.790 [Pipeline] sh 00:01:06.091 + tar --no-same-owner -xf spdk_8531656d379a9809102b4858f69950decf92a1c5.tar.gz 00:01:08.647 [Pipeline] sh 00:01:08.933 + git -C spdk log --oneline -n5 00:01:08.933 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:08.933 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:08.933 7bc1134d6 test/scheduler: Read PID's status file only once 00:01:08.933 0b65bb478 test/scheduler: Account for multiple cpus in the affinity mask 00:01:08.933 a96685099 test/nvmf: Tweak nvme_connect() 00:01:08.945 [Pipeline] } 00:01:08.960 [Pipeline] // stage 00:01:08.969 [Pipeline] stage 00:01:08.971 [Pipeline] { (Prepare) 00:01:08.988 [Pipeline] writeFile 00:01:09.004 [Pipeline] sh 00:01:09.288 + logger -p user.info -t JENKINS-CI 00:01:09.301 [Pipeline] sh 00:01:09.584 + logger -p user.info -t JENKINS-CI 00:01:09.597 [Pipeline] sh 00:01:09.884 + cat autorun-spdk.conf 00:01:09.884 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:09.884 SPDK_TEST_NVMF=1 00:01:09.884 SPDK_TEST_NVME_CLI=1 00:01:09.884 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:09.884 SPDK_TEST_NVMF_NICS=e810 00:01:09.884 SPDK_TEST_VFIOUSER=1 00:01:09.884 SPDK_RUN_UBSAN=1 00:01:09.884 NET_TYPE=phy 00:01:09.892 RUN_NIGHTLY=0 00:01:09.897 [Pipeline] readFile 00:01:09.920 [Pipeline] withEnv 00:01:09.922 [Pipeline] { 00:01:09.933 [Pipeline] sh 00:01:10.218 + set -ex 00:01:10.218 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:10.218 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.218 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.218 ++ SPDK_TEST_NVMF=1 00:01:10.218 ++ SPDK_TEST_NVME_CLI=1 00:01:10.218 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.218 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:10.218 ++ SPDK_TEST_VFIOUSER=1 00:01:10.218 ++ SPDK_RUN_UBSAN=1 00:01:10.218 ++ NET_TYPE=phy 00:01:10.218 ++ RUN_NIGHTLY=0 00:01:10.218 + case $SPDK_TEST_NVMF_NICS in 00:01:10.218 + DRIVERS=ice 00:01:10.218 + [[ tcp == \r\d\m\a ]] 00:01:10.218 + [[ -n ice ]] 00:01:10.218 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:10.218 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:10.218 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:10.218 rmmod: ERROR: Module irdma is not currently loaded 00:01:10.218 rmmod: ERROR: Module i40iw is not currently loaded 00:01:10.218 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:10.218 + true 00:01:10.218 + for D in $DRIVERS 00:01:10.218 + sudo modprobe ice 00:01:10.218 + exit 0 00:01:10.228 [Pipeline] } 00:01:10.241 [Pipeline] // withEnv 00:01:10.246 [Pipeline] } 00:01:10.259 [Pipeline] // stage 00:01:10.269 [Pipeline] catchError 00:01:10.270 [Pipeline] { 00:01:10.282 [Pipeline] timeout 00:01:10.282 Timeout set to expire in 1 hr 0 min 00:01:10.284 [Pipeline] { 00:01:10.297 [Pipeline] stage 00:01:10.299 [Pipeline] { (Tests) 00:01:10.313 [Pipeline] sh 00:01:10.599 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.599 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.599 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.599 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:10.599 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:10.599 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.599 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:10.599 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.599 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:10.599 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:10.599 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:10.599 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:10.599 + source /etc/os-release 00:01:10.599 ++ NAME='Fedora Linux' 00:01:10.599 ++ VERSION='39 (Cloud Edition)' 00:01:10.599 ++ ID=fedora 00:01:10.599 ++ VERSION_ID=39 00:01:10.599 ++ VERSION_CODENAME= 00:01:10.599 ++ PLATFORM_ID=platform:f39 00:01:10.599 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:10.599 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:10.599 ++ LOGO=fedora-logo-icon 00:01:10.599 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:10.599 ++ HOME_URL=https://fedoraproject.org/ 00:01:10.599 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:10.599 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:10.599 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:10.599 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:10.599 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:10.599 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:10.599 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:10.599 ++ SUPPORT_END=2024-11-12 00:01:10.599 ++ VARIANT='Cloud Edition' 00:01:10.599 ++ VARIANT_ID=cloud 00:01:10.599 + uname -a 00:01:10.599 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:10.599 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:11.540 Hugepages 00:01:11.540 node hugesize free / total 00:01:11.540 node0 1048576kB 0 / 0 00:01:11.540 node0 2048kB 0 / 0 00:01:11.540 node1 1048576kB 0 / 0 00:01:11.540 node1 2048kB 0 / 0 00:01:11.540 
00:01:11.540 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:11.540 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:11.540 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:11.540 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:11.540 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:11.540 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:11.540 + rm -f /tmp/spdk-ld-path 00:01:11.540 + source autorun-spdk.conf 00:01:11.540 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.540 ++ SPDK_TEST_NVMF=1 00:01:11.540 ++ SPDK_TEST_NVME_CLI=1 00:01:11.540 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.540 ++ SPDK_TEST_NVMF_NICS=e810 00:01:11.540 ++ SPDK_TEST_VFIOUSER=1 00:01:11.540 ++ SPDK_RUN_UBSAN=1 00:01:11.540 ++ NET_TYPE=phy 00:01:11.540 ++ RUN_NIGHTLY=0 00:01:11.540 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:11.540 + [[ -n '' ]] 00:01:11.540 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:11.799 + for M in /var/spdk/build-*-manifest.txt 00:01:11.799 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:11.799 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.799 + for M in /var/spdk/build-*-manifest.txt 00:01:11.799 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:11.799 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.799 + for M in /var/spdk/build-*-manifest.txt 00:01:11.799 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:11.799 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:11.799 ++ uname 00:01:11.799 + [[ Linux == \L\i\n\u\x ]] 00:01:11.799 + sudo dmesg -T 00:01:11.799 + sudo dmesg --clear 00:01:11.799 + dmesg_pid=2726685 00:01:11.799 + [[ Fedora Linux == FreeBSD ]] 00:01:11.799 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.799 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:11.799 + sudo dmesg -Tw 00:01:11.799 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:11.799 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:11.799 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:11.799 + [[ -x /usr/src/fio-static/fio ]] 00:01:11.799 + export FIO_BIN=/usr/src/fio-static/fio 00:01:11.799 + FIO_BIN=/usr/src/fio-static/fio 00:01:11.799 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:11.799 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:11.799 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:11.799 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.799 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:11.799 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:11.799 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.799 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:11.799 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.799 11:19:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:11.799 11:19:52 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:11.799 11:19:52 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:11.799 11:19:52 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:11.799 11:19:52 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:11.799 11:19:52 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:11.799 11:19:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:11.799 11:19:52 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:11.799 11:19:52 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:11.799 11:19:52 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:11.799 11:19:52 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:11.799 11:19:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.799 11:19:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.800 11:19:52 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.800 11:19:52 -- paths/export.sh@5 -- $ export PATH 00:01:11.800 11:19:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:11.800 11:19:52 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:11.800 11:19:52 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:11.800 11:19:52 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731665992.XXXXXX 00:01:11.800 11:19:52 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731665992.enD2ei 00:01:11.800 11:19:52 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:11.800 11:19:52 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:11.800 11:19:52 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:11.800 11:19:52 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:11.800 11:19:52 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:11.800 11:19:52 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:11.800 11:19:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:11.800 11:19:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.800 11:19:52 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:11.800 11:19:52 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:11.800 11:19:52 -- pm/common@17 -- $ local monitor 00:01:11.800 11:19:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.800 11:19:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.800 11:19:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.800 11:19:52 -- pm/common@21 -- $ date +%s 00:01:11.800 11:19:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:11.800 11:19:52 -- pm/common@21 -- $ date +%s 00:01:11.800 11:19:52 -- pm/common@25 -- $ sleep 1 00:01:11.800 11:19:52 -- pm/common@21 -- $ date +%s 00:01:11.800 11:19:52 -- pm/common@21 -- $ date +%s 00:01:11.800 11:19:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665992 00:01:11.800 11:19:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665992 00:01:11.800 11:19:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665992 00:01:11.800 11:19:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731665992 00:01:11.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665992_collect-vmstat.pm.log 00:01:11.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665992_collect-cpu-load.pm.log 00:01:11.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665992_collect-cpu-temp.pm.log 00:01:11.800 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731665992_collect-bmc-pm.bmc.pm.log 00:01:12.736 11:19:53 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:12.736 11:19:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:12.736 11:19:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:12.736 11:19:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:12.736 11:19:53 -- spdk/autobuild.sh@16 -- $ date -u 00:01:12.736 Fri Nov 15 10:19:53 AM UTC 2024 00:01:12.736 11:19:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:12.736 v25.01-pre-186-g8531656d3 00:01:12.736 11:19:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:12.736 11:19:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:12.736 11:19:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:12.736 11:19:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:12.736 11:19:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:12.736 11:19:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.994 ************************************ 00:01:12.994 START TEST ubsan 00:01:12.994 ************************************ 00:01:12.994 11:19:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:12.994 using ubsan 00:01:12.994 00:01:12.994 real 0m0.000s 00:01:12.994 user 0m0.000s 00:01:12.994 sys 0m0.000s 00:01:12.994 11:19:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:12.994 11:19:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:12.994 ************************************ 00:01:12.994 END TEST ubsan 00:01:12.994 ************************************ 00:01:12.994 11:19:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:12.994 11:19:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:12.994 11:19:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:12.994 11:19:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:12.994 11:19:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:12.994 11:19:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:12.994 11:19:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:12.994 11:19:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:12.994 
11:19:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:12.994 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:12.994 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:13.254 Using 'verbs' RDMA provider 00:01:23.809 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:33.789 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:33.789 Creating mk/config.mk...done. 00:01:33.789 Creating mk/cc.flags.mk...done. 00:01:33.789 Type 'make' to build. 00:01:33.789 11:20:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:33.789 11:20:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:33.789 11:20:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:33.789 11:20:14 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.047 ************************************ 00:01:34.047 START TEST make 00:01:34.047 ************************************ 00:01:34.047 11:20:14 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:34.310 make[1]: Nothing to be done for 'all'. 00:01:36.226 The Meson build system 00:01:36.226 Version: 1.5.0 00:01:36.226 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:36.226 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.226 Build type: native build 00:01:36.226 Project name: libvfio-user 00:01:36.226 Project version: 0.0.1 00:01:36.226 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.226 C linker for the host machine: cc ld.bfd 2.40-14 00:01:36.226 Host machine cpu family: x86_64 00:01:36.226 Host machine cpu: x86_64 00:01:36.226 Run-time dependency threads found: YES 00:01:36.226 Library dl found: YES 00:01:36.226 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.226 Run-time dependency json-c found: YES 0.17 00:01:36.226 Run-time dependency cmocka found: YES 1.1.7 00:01:36.226 Program pytest-3 found: NO 00:01:36.226 Program flake8 found: NO 00:01:36.226 Program misspell-fixer found: NO 00:01:36.226 Program restructuredtext-lint found: NO 00:01:36.226 Program valgrind found: YES (/usr/bin/valgrind) 00:01:36.226 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.226 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.226 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.227 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:36.227 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:36.227 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:36.227 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:36.227 Build targets in project: 8 00:01:36.227 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:36.227 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:36.227 00:01:36.227 libvfio-user 0.0.1 00:01:36.227 00:01:36.227 User defined options 00:01:36.227 buildtype : debug 00:01:36.227 default_library: shared 00:01:36.227 libdir : /usr/local/lib 00:01:36.227 00:01:36.227 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:36.804 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:37.065 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:37.065 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:37.065 [3/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:37.065 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:37.065 [5/37] Compiling C object samples/null.p/null.c.o 00:01:37.066 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:37.066 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:37.066 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:37.066 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:37.066 [10/37] Compiling C object samples/server.p/server.c.o 00:01:37.066 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:37.066 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:37.066 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:37.066 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:37.066 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:37.066 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:37.066 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:37.326 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:37.326 [19/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:37.326 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:37.326 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:37.326 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:37.326 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:37.326 [24/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:37.326 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:37.326 [26/37] Compiling C object samples/client.p/client.c.o 00:01:37.326 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:37.326 [28/37] Linking target samples/client 00:01:37.326 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:37.326 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:37.589 [31/37] Linking target test/unit_tests 00:01:37.589 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:37.589 [33/37] Linking target samples/null 00:01:37.589 [34/37] Linking target samples/lspci 00:01:37.589 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:37.589 [36/37] Linking target samples/gpio-pci-idio-16 00:01:37.589 [37/37] Linking target samples/server 00:01:37.589 INFO: autodetecting backend as ninja 00:01:37.589 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:37.851 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:38.431 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:38.431 ninja: no work to do. 00:01:43.695 The Meson build system 00:01:43.695 Version: 1.5.0 00:01:43.695 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:43.695 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:43.695 Build type: native build 00:01:43.695 Program cat found: YES (/usr/bin/cat) 00:01:43.695 Project name: DPDK 00:01:43.695 Project version: 24.03.0 00:01:43.695 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:43.695 C linker for the host machine: cc ld.bfd 2.40-14 00:01:43.695 Host machine cpu family: x86_64 00:01:43.695 Host machine cpu: x86_64 00:01:43.695 Message: ## Building in Developer Mode ## 00:01:43.695 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:43.695 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:43.695 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:43.695 Program python3 found: YES (/usr/bin/python3) 00:01:43.695 Program cat found: YES (/usr/bin/cat) 00:01:43.695 Compiler for C supports arguments -march=native: YES 00:01:43.695 Checking for size of "void *" : 8 00:01:43.695 Checking for size of "void *" : 8 (cached) 00:01:43.695 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:43.695 Library m found: YES 00:01:43.695 Library numa found: YES 00:01:43.695 Has header "numaif.h" : YES 00:01:43.695 Library fdt found: NO 00:01:43.695 Library execinfo found: NO 00:01:43.695 Has header "execinfo.h" : YES 00:01:43.695 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:43.695 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:43.695 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:43.695 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:43.695 Run-time dependency openssl found: YES 3.1.1 00:01:43.695 Run-time dependency libpcap found: YES 1.10.4 00:01:43.695 Has header "pcap.h" with dependency libpcap: YES 00:01:43.695 Compiler for C supports arguments -Wcast-qual: YES 00:01:43.695 Compiler for C supports arguments -Wdeprecated: YES 00:01:43.695 Compiler for C supports arguments -Wformat: YES 00:01:43.695 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:43.695 Compiler for C supports arguments -Wformat-security: NO 00:01:43.695 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:43.695 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:43.695 Compiler for C supports arguments -Wnested-externs: YES 00:01:43.695 Compiler for C supports arguments -Wold-style-definition: YES 00:01:43.695 Compiler for C supports arguments -Wpointer-arith: YES 00:01:43.695 Compiler for C supports arguments -Wsign-compare: YES 00:01:43.695 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:43.695 Compiler for C supports arguments -Wundef: YES 00:01:43.695 Compiler for C supports arguments -Wwrite-strings: YES 00:01:43.695 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:43.695 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:43.695 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:43.695 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:43.695 Program objdump found: YES (/usr/bin/objdump) 00:01:43.695 Compiler for C supports arguments -mavx512f: YES 00:01:43.695 Checking if "AVX512 checking" compiles: YES 00:01:43.695 Fetching value of define "__SSE4_2__" : 1 00:01:43.695 Fetching value of define "__AES__" : 1 00:01:43.695 Fetching value of define "__AVX__" : 1 00:01:43.695 Fetching value of define "__AVX2__" : (undefined) 00:01:43.695 Fetching value of define "__AVX512BW__" : (undefined) 00:01:43.695 Fetching value of define "__AVX512CD__" : (undefined) 00:01:43.695 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:43.695 Fetching value of define "__AVX512F__" : (undefined) 00:01:43.695 Fetching value of define "__AVX512VL__" : (undefined) 00:01:43.695 Fetching value of define "__PCLMUL__" : 1 00:01:43.695 Fetching value of define "__RDRND__" : 1 00:01:43.695 Fetching value of define "__RDSEED__" : (undefined) 00:01:43.695 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:43.696 Fetching value of define "__znver1__" : (undefined) 00:01:43.696 Fetching value of define "__znver2__" : (undefined) 00:01:43.696 Fetching value of define "__znver3__" : (undefined) 00:01:43.696 Fetching value of define "__znver4__" : (undefined) 00:01:43.696 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:43.696 Message: lib/log: Defining dependency "log" 00:01:43.696 Message: lib/kvargs: Defining dependency "kvargs" 00:01:43.696 Message: lib/telemetry: Defining dependency "telemetry" 00:01:43.696 Checking for function "getentropy" : NO 00:01:43.696 Message: lib/eal: Defining dependency "eal" 00:01:43.696 Message: lib/ring: Defining dependency "ring" 00:01:43.696 Message: lib/rcu: Defining dependency "rcu" 00:01:43.696 Message: lib/mempool: Defining dependency "mempool" 00:01:43.696 Message: lib/mbuf: Defining dependency "mbuf" 00:01:43.696 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:43.696 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:43.696 Compiler for C supports arguments -mpclmul: YES 00:01:43.696 Compiler for C supports arguments -maes: YES 00:01:43.696 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:43.696 Compiler for C supports arguments -mavx512bw: YES 00:01:43.696 Compiler for C supports arguments -mavx512dq: YES 00:01:43.696 Compiler for C supports arguments -mavx512vl: YES 00:01:43.696 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:43.696 Compiler for C supports arguments -mavx2: YES 00:01:43.696 Compiler for C supports arguments -mavx: YES 00:01:43.696 Message: lib/net: Defining dependency "net" 00:01:43.696 Message: lib/meter: Defining dependency "meter" 00:01:43.696 Message: lib/ethdev: Defining dependency "ethdev" 00:01:43.696 Message: lib/pci: Defining dependency "pci" 00:01:43.696 Message: lib/cmdline: Defining dependency "cmdline" 00:01:43.696 Message: lib/hash: Defining dependency "hash" 00:01:43.696 Message: lib/timer: Defining dependency "timer" 00:01:43.696 Message: lib/compressdev: Defining dependency "compressdev" 00:01:43.696 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:43.696 Message: lib/dmadev: Defining dependency "dmadev" 00:01:43.696 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:43.696 Message: lib/power: Defining dependency "power" 00:01:43.696 Message: lib/reorder: Defining dependency 
"reorder" 00:01:43.696 Message: lib/security: Defining dependency "security" 00:01:43.696 Has header "linux/userfaultfd.h" : YES 00:01:43.696 Has header "linux/vduse.h" : YES 00:01:43.696 Message: lib/vhost: Defining dependency "vhost" 00:01:43.696 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:43.696 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:43.696 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:43.696 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:43.696 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:43.696 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:43.696 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:43.696 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:43.696 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:43.696 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:43.696 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:43.696 Configuring doxy-api-html.conf using configuration 00:01:43.696 Configuring doxy-api-man.conf using configuration 00:01:43.696 Program mandb found: YES (/usr/bin/mandb) 00:01:43.696 Program sphinx-build found: NO 00:01:43.696 Configuring rte_build_config.h using configuration 00:01:43.696 Message: 00:01:43.696 ================= 00:01:43.696 Applications Enabled 00:01:43.696 ================= 00:01:43.696 00:01:43.696 apps: 00:01:43.696 00:01:43.696 00:01:43.696 Message: 00:01:43.696 ================= 00:01:43.696 Libraries Enabled 00:01:43.696 ================= 00:01:43.696 00:01:43.696 libs: 00:01:43.696 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:43.696 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:43.696 cryptodev, dmadev, power, reorder, security, vhost, 00:01:43.696 00:01:43.696 Message: 00:01:43.696 =============== 00:01:43.696 Drivers Enabled 00:01:43.696 =============== 00:01:43.696 00:01:43.696 common: 00:01:43.696 00:01:43.696 bus: 00:01:43.696 pci, vdev, 00:01:43.696 mempool: 00:01:43.696 ring, 00:01:43.696 dma: 00:01:43.696 00:01:43.696 net: 00:01:43.696 00:01:43.696 crypto: 00:01:43.696 00:01:43.696 compress: 00:01:43.696 00:01:43.696 vdpa: 00:01:43.696 00:01:43.696 00:01:43.696 Message: 00:01:43.696 ================= 00:01:43.696 Content Skipped 00:01:43.696 ================= 00:01:43.696 00:01:43.696 apps: 00:01:43.696 dumpcap: explicitly disabled via build config 00:01:43.696 graph: explicitly disabled via build config 00:01:43.696 pdump: explicitly disabled via build config 00:01:43.696 proc-info: explicitly disabled via build config 00:01:43.696 test-acl: explicitly disabled via build config 00:01:43.696 test-bbdev: explicitly disabled via build config 00:01:43.696 test-cmdline: explicitly disabled via build config 00:01:43.696 test-compress-perf: explicitly disabled via build config 00:01:43.696 test-crypto-perf: explicitly disabled via build config 00:01:43.696 test-dma-perf: explicitly disabled via build config 00:01:43.696 test-eventdev: explicitly disabled via build config 00:01:43.696 test-fib: explicitly disabled via build config 00:01:43.696 test-flow-perf: explicitly disabled via build config 00:01:43.696 test-gpudev: explicitly disabled via build config 00:01:43.696 test-mldev: explicitly disabled via build config 00:01:43.696 test-pipeline: explicitly disabled via build config 00:01:43.696 test-pmd: explicitly 
disabled via build config 00:01:43.696 test-regex: explicitly disabled via build config 00:01:43.696 test-sad: explicitly disabled via build config 00:01:43.696 test-security-perf: explicitly disabled via build config 00:01:43.696 00:01:43.696 libs: 00:01:43.696 argparse: explicitly disabled via build config 00:01:43.696 metrics: explicitly disabled via build config 00:01:43.696 acl: explicitly disabled via build config 00:01:43.696 bbdev: explicitly disabled via build config 00:01:43.696 bitratestats: explicitly disabled via build config 00:01:43.696 bpf: explicitly disabled via build config 00:01:43.696 cfgfile: explicitly disabled via build config 00:01:43.696 distributor: explicitly disabled via build config 00:01:43.696 efd: explicitly disabled via build config 00:01:43.696 eventdev: explicitly disabled via build config 00:01:43.696 dispatcher: explicitly disabled via build config 00:01:43.696 gpudev: explicitly disabled via build config 00:01:43.696 gro: explicitly disabled via build config 00:01:43.696 gso: explicitly disabled via build config 00:01:43.696 ip_frag: explicitly disabled via build config 00:01:43.696 jobstats: explicitly disabled via build config 00:01:43.696 latencystats: explicitly disabled via build config 00:01:43.696 lpm: explicitly disabled via build config 00:01:43.696 member: explicitly disabled via build config 00:01:43.696 pcapng: explicitly disabled via build config 00:01:43.696 rawdev: explicitly disabled via build config 00:01:43.696 regexdev: explicitly disabled via build config 00:01:43.696 mldev: explicitly disabled via build config 00:01:43.696 rib: explicitly disabled via build config 00:01:43.696 sched: explicitly disabled via build config 00:01:43.696 stack: explicitly disabled via build config 00:01:43.696 ipsec: explicitly disabled via build config 00:01:43.696 pdcp: explicitly disabled via build config 00:01:43.696 fib: explicitly disabled via build config 00:01:43.696 port: explicitly disabled via build config 00:01:43.696 pdump: explicitly disabled via build config 00:01:43.696 table: explicitly disabled via build config 00:01:43.696 pipeline: explicitly disabled via build config 00:01:43.696 graph: explicitly disabled via build config 00:01:43.696 node: explicitly disabled via build config 00:01:43.696 00:01:43.696 drivers: 00:01:43.696 common/cpt: not in enabled drivers build config 00:01:43.696 common/dpaax: not in enabled drivers build config 00:01:43.696 common/iavf: not in enabled drivers build config 00:01:43.696 common/idpf: not in enabled drivers build config 00:01:43.696 common/ionic: not in enabled drivers build config 00:01:43.696 common/mvep: not in enabled drivers build config 00:01:43.696 common/octeontx: not in enabled drivers build config 00:01:43.696 bus/auxiliary: not in enabled drivers build config 00:01:43.696 bus/cdx: not in enabled drivers build config 00:01:43.696 bus/dpaa: not in enabled drivers build config 00:01:43.696 bus/fslmc: not in enabled drivers build config 00:01:43.696 bus/ifpga: not in enabled drivers build config 00:01:43.696 bus/platform: not in enabled drivers build config 00:01:43.696 bus/uacce: not in enabled drivers build config 00:01:43.696 bus/vmbus: not in enabled drivers build config 00:01:43.696 common/cnxk: not in enabled drivers build config 00:01:43.696 common/mlx5: not in enabled drivers build config 00:01:43.696 common/nfp: not in enabled drivers build config 00:01:43.696 common/nitrox: not in enabled drivers build config 00:01:43.696 common/qat: not in enabled drivers build config 
00:01:43.696 common/sfc_efx: not in enabled drivers build config 00:01:43.696 mempool/bucket: not in enabled drivers build config 00:01:43.696 mempool/cnxk: not in enabled drivers build config 00:01:43.696 mempool/dpaa: not in enabled drivers build config 00:01:43.696 mempool/dpaa2: not in enabled drivers build config 00:01:43.696 mempool/octeontx: not in enabled drivers build config 00:01:43.696 mempool/stack: not in enabled drivers build config 00:01:43.696 dma/cnxk: not in enabled drivers build config 00:01:43.696 dma/dpaa: not in enabled drivers build config 00:01:43.696 dma/dpaa2: not in enabled drivers build config 00:01:43.696 dma/hisilicon: not in enabled drivers build config 00:01:43.696 dma/idxd: not in enabled drivers build config 00:01:43.696 dma/ioat: not in enabled drivers build config 00:01:43.696 dma/skeleton: not in enabled drivers build config 00:01:43.696 net/af_packet: not in enabled drivers build config 00:01:43.696 net/af_xdp: not in enabled drivers build config 00:01:43.696 net/ark: not in enabled drivers build config 00:01:43.696 net/atlantic: not in enabled drivers build config 00:01:43.696 net/avp: not in enabled drivers build config 00:01:43.696 net/axgbe: not in enabled drivers build config 00:01:43.696 net/bnx2x: not in enabled drivers build config 00:01:43.697 net/bnxt: not in enabled drivers build config 00:01:43.697 net/bonding: not in enabled drivers build config 00:01:43.697 net/cnxk: not in enabled drivers build config 00:01:43.697 net/cpfl: not in enabled drivers build config 00:01:43.697 net/cxgbe: not in enabled drivers build config 00:01:43.697 net/dpaa: not in enabled drivers build config 00:01:43.697 net/dpaa2: not in enabled drivers build config 00:01:43.697 net/e1000: not in enabled drivers build config 00:01:43.697 net/ena: not in enabled drivers build config 00:01:43.697 net/enetc: not in enabled drivers build config 00:01:43.697 net/enetfec: not in enabled drivers build config 00:01:43.697 net/enic: not in enabled drivers build config 00:01:43.697 net/failsafe: not in enabled drivers build config 00:01:43.697 net/fm10k: not in enabled drivers build config 00:01:43.697 net/gve: not in enabled drivers build config 00:01:43.697 net/hinic: not in enabled drivers build config 00:01:43.697 net/hns3: not in enabled drivers build config 00:01:43.697 net/i40e: not in enabled drivers build config 00:01:43.697 net/iavf: not in enabled drivers build config 00:01:43.697 net/ice: not in enabled drivers build config 00:01:43.697 net/idpf: not in enabled drivers build config 00:01:43.697 net/igc: not in enabled drivers build config 00:01:43.697 net/ionic: not in enabled drivers build config 00:01:43.697 net/ipn3ke: not in enabled drivers build config 00:01:43.697 net/ixgbe: not in enabled drivers build config 00:01:43.697 net/mana: not in enabled drivers build config 00:01:43.697 net/memif: not in enabled drivers build config 00:01:43.697 net/mlx4: not in enabled drivers build config 00:01:43.697 net/mlx5: not in enabled drivers build config 00:01:43.697 net/mvneta: not in enabled drivers build config 00:01:43.697 net/mvpp2: not in enabled drivers build config 00:01:43.697 net/netvsc: not in enabled drivers build config 00:01:43.697 net/nfb: not in enabled drivers build config 00:01:43.697 net/nfp: not in enabled drivers build config 00:01:43.697 net/ngbe: not in enabled drivers build config 00:01:43.697 net/null: not in enabled drivers build config 00:01:43.697 net/octeontx: not in enabled drivers build config 00:01:43.697 net/octeon_ep: not in enabled 
drivers build config 00:01:43.697 net/pcap: not in enabled drivers build config 00:01:43.697 net/pfe: not in enabled drivers build config 00:01:43.697 net/qede: not in enabled drivers build config 00:01:43.697 net/ring: not in enabled drivers build config 00:01:43.697 net/sfc: not in enabled drivers build config 00:01:43.697 net/softnic: not in enabled drivers build config 00:01:43.697 net/tap: not in enabled drivers build config 00:01:43.697 net/thunderx: not in enabled drivers build config 00:01:43.697 net/txgbe: not in enabled drivers build config 00:01:43.697 net/vdev_netvsc: not in enabled drivers build config 00:01:43.697 net/vhost: not in enabled drivers build config 00:01:43.697 net/virtio: not in enabled drivers build config 00:01:43.697 net/vmxnet3: not in enabled drivers build config 00:01:43.697 raw/*: missing internal dependency, "rawdev" 00:01:43.697 crypto/armv8: not in enabled drivers build config 00:01:43.697 crypto/bcmfs: not in enabled drivers build config 00:01:43.697 crypto/caam_jr: not in enabled drivers build config 00:01:43.697 crypto/ccp: not in enabled drivers build config 00:01:43.697 crypto/cnxk: not in enabled drivers build config 00:01:43.697 crypto/dpaa_sec: not in enabled drivers build config 00:01:43.697 crypto/dpaa2_sec: not in enabled drivers build config 00:01:43.697 crypto/ipsec_mb: not in enabled drivers build config 00:01:43.697 crypto/mlx5: not in enabled drivers build config 00:01:43.697 crypto/mvsam: not in enabled drivers build config 00:01:43.697 crypto/nitrox: not in enabled drivers build config 00:01:43.697 crypto/null: not in enabled drivers build config 00:01:43.697 crypto/octeontx: not in enabled drivers build config 00:01:43.697 crypto/openssl: not in enabled drivers build config 00:01:43.697 crypto/scheduler: not in enabled drivers build config 00:01:43.697 crypto/uadk: not in enabled drivers build config 00:01:43.697 crypto/virtio: not in enabled drivers build config 00:01:43.697 compress/isal: not in enabled drivers build config 00:01:43.697 compress/mlx5: not in enabled drivers build config 00:01:43.697 compress/nitrox: not in enabled drivers build config 00:01:43.697 compress/octeontx: not in enabled drivers build config 00:01:43.697 compress/zlib: not in enabled drivers build config 00:01:43.697 regex/*: missing internal dependency, "regexdev" 00:01:43.697 ml/*: missing internal dependency, "mldev" 00:01:43.697 vdpa/ifc: not in enabled drivers build config 00:01:43.697 vdpa/mlx5: not in enabled drivers build config 00:01:43.697 vdpa/nfp: not in enabled drivers build config 00:01:43.697 vdpa/sfc: not in enabled drivers build config 00:01:43.697 event/*: missing internal dependency, "eventdev" 00:01:43.697 baseband/*: missing internal dependency, "bbdev" 00:01:43.697 gpu/*: missing internal dependency, "gpudev" 00:01:43.697 00:01:43.697 00:01:43.697 Build targets in project: 85 00:01:43.697 00:01:43.697 DPDK 24.03.0 00:01:43.697 00:01:43.697 User defined options 00:01:43.697 buildtype : debug 00:01:43.697 default_library : shared 00:01:43.697 libdir : lib 00:01:43.697 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:43.697 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:43.697 c_link_args : 00:01:43.697 cpu_instruction_set: native 00:01:43.697 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:43.697 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:43.697 enable_docs : false 00:01:43.697 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:43.697 enable_kmods : false 00:01:43.697 max_lcores : 128 00:01:43.697 tests : false 00:01:43.697 00:01:43.697 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:44.268 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:44.268 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:44.268 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:44.268 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:44.268 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:44.268 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:44.268 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:44.268 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:44.268 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:44.268 [9/268] Linking static target lib/librte_kvargs.a 00:01:44.268 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:44.268 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:44.268 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:44.268 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:44.268 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:44.268 [15/268] Linking static target lib/librte_log.a 00:01:44.530 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.106 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.106 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.106 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.106 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:45.106 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.106 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.106 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:45.106 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:45.106 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.106 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.106 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.106 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:45.106 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.106 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.106 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:45.106 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.106 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:45.106 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:45.106 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:45.106 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.106 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:45.106 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.106 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.106 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.106 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.106 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.106 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:45.106 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:45.106 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:45.106 [46/268] Linking static target lib/librte_telemetry.a 00:01:45.106 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:45.106 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.106 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:45.106 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.106 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:45.106 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:45.365 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:45.365 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:45.365 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.365 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:45.365 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:45.365 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:45.365 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.365 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:45.365 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:45.365 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:45.365 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:45.365 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.626 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:45.626 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:45.626 [67/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.626 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:45.626 [69/268] Linking static target lib/librte_pci.a 00:01:45.887 [70/268] Linking target lib/librte_log.so.24.1 00:01:45.887 [71/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:45.887 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.887 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:45.887 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:45.887 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:45.887 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.150 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:46.150 [78/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:46.150 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.150 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:46.150 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.150 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:46.150 [83/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:46.150 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:46.150 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.150 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:46.150 [87/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:46.150 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:46.150 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:46.150 [90/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.150 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.150 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.150 [93/268] Linking static target lib/librte_ring.a 00:01:46.150 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.150 [95/268] Linking static target lib/librte_meter.a 00:01:46.150 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:46.150 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.150 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.150 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:46.150 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:46.150 [101/268] Linking target lib/librte_kvargs.so.24.1 00:01:46.150 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.150 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.150 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.150 [105/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.150 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:46.150 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:46.150 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:46.150 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:46.150 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:46.409 [111/268] Linking static target lib/librte_eal.a 00:01:46.409 
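The option summary printed just before ninja starts (disable_libs ..., enable_drivers bus,bus/pci,bus/vdev,mempool/ring, enable_docs false, enable_kmods false, max_lcores 128, tests false) is the bundled DPDK submodule being configured with everything SPDK does not need switched off. The CI's exact configure command is not shown in this log, so the sketch below is only an illustration of a meson invocation that would produce a summary like the one above, run from the spdk/dpdk directory; the option names follow the printed summary and the values are copied from it.

  # Illustrative only: configure a pared-down DPDK build tree like the one ninja enters above,
  # then build it with the same ninja invocation the log reports.
  meson setup build-tmp \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_libs=bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp -j 48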
[112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:46.409 [113/268] Linking target lib/librte_telemetry.so.24.1 00:01:46.409 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:46.409 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.409 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:46.409 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.409 [118/268] Linking static target lib/librte_mempool.a 00:01:46.409 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:46.409 [120/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.409 [121/268] Linking static target lib/librte_rcu.a 00:01:46.409 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:46.409 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.409 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:46.671 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:46.671 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:46.671 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:46.671 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.671 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:46.671 [130/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:46.671 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:46.671 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.671 [133/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:46.671 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:46.671 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.951 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:46.951 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.951 [138/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.951 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.951 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:46.951 [141/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.951 [142/268] Linking static target lib/librte_net.a 00:01:46.951 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:46.951 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:47.267 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.267 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.267 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.267 [148/268] Linking static target lib/librte_cmdline.a 00:01:47.267 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.267 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:47.267 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:47.267 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:47.267 [153/268] Linking static target lib/librte_timer.a 00:01:47.267 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:47.267 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.267 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:47.267 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.267 [158/268] Linking static target lib/librte_dmadev.a 00:01:47.267 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.267 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.267 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.267 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:47.553 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:47.553 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:47.553 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:47.553 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.553 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.553 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.553 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.553 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:47.553 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:47.553 [172/268] Linking static target lib/librte_power.a 00:01:47.553 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.554 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.554 [175/268] Linking static target lib/librte_compressdev.a 00:01:47.823 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.823 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.823 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:47.823 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.823 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.823 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.823 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.823 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.823 [184/268] Linking static target lib/librte_hash.a 00:01:47.823 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.823 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.823 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.823 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.823 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.823 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:47.823 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.823 
[192/268] Linking static target lib/librte_reorder.a 00:01:48.080 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.080 [194/268] Linking static target lib/librte_mbuf.a 00:01:48.080 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.080 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.080 [197/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.080 [198/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.080 [199/268] Linking static target lib/librte_security.a 00:01:48.080 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.080 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.080 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.080 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:48.080 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.080 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.080 [206/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.080 [207/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.080 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.339 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.339 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.339 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.339 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:48.339 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.339 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.339 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.339 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.339 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.339 [218/268] Linking static target lib/librte_ethdev.a 00:01:48.339 [219/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.339 [220/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.339 [221/268] Linking static target drivers/librte_mempool_ring.a 00:01:48.339 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.597 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.597 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:48.597 [225/268] Linking static target lib/librte_cryptodev.a 00:01:48.597 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.971 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.905 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:52.805 [229/268] Generating lib/ethdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:52.805 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.805 [231/268] Linking target lib/librte_eal.so.24.1 00:01:53.063 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:53.063 [233/268] Linking target lib/librte_timer.so.24.1 00:01:53.063 [234/268] Linking target lib/librte_meter.so.24.1 00:01:53.063 [235/268] Linking target lib/librte_ring.so.24.1 00:01:53.063 [236/268] Linking target lib/librte_pci.so.24.1 00:01:53.064 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:53.064 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:53.064 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:53.064 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:53.064 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:53.064 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:53.064 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:53.322 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:53.322 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:53.322 [246/268] Linking target lib/librte_rcu.so.24.1 00:01:53.322 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.322 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.322 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.322 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:53.579 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.579 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:53.579 [253/268] Linking target lib/librte_net.so.24.1 00:01:53.579 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:53.579 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:53.579 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.579 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.837 [258/268] Linking target lib/librte_cmdline.so.24.1 00:01:53.837 [259/268] Linking target lib/librte_hash.so.24.1 00:01:53.837 [260/268] Linking target lib/librte_security.so.24.1 00:01:53.837 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:53.837 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.837 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.837 [264/268] Linking target lib/librte_power.so.24.1 00:01:57.115 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.115 [266/268] Linking static target lib/librte_vhost.a 00:01:58.047 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.047 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:58.047 INFO: autodetecting backend as ninja 00:01:58.047 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:19.961 CC lib/log/log.o 00:02:19.961 CC lib/ut_mock/mock.o 00:02:19.961 CC lib/log/log_flags.o 00:02:19.961 CC lib/log/log_deprecated.o 00:02:19.961 CC lib/ut/ut.o 00:02:19.962 LIB 
libspdk_ut.a 00:02:19.962 LIB libspdk_log.a 00:02:19.962 LIB libspdk_ut_mock.a 00:02:19.962 SO libspdk_ut.so.2.0 00:02:19.962 SO libspdk_ut_mock.so.6.0 00:02:19.962 SO libspdk_log.so.7.1 00:02:19.962 SYMLINK libspdk_ut.so 00:02:19.962 SYMLINK libspdk_ut_mock.so 00:02:19.962 SYMLINK libspdk_log.so 00:02:19.962 CC lib/dma/dma.o 00:02:19.962 CXX lib/trace_parser/trace.o 00:02:19.962 CC lib/ioat/ioat.o 00:02:19.962 CC lib/util/base64.o 00:02:19.962 CC lib/util/bit_array.o 00:02:19.962 CC lib/util/cpuset.o 00:02:19.962 CC lib/util/crc16.o 00:02:19.962 CC lib/util/crc32.o 00:02:19.962 CC lib/util/crc32c.o 00:02:19.962 CC lib/util/crc32_ieee.o 00:02:19.962 CC lib/util/crc64.o 00:02:19.962 CC lib/util/dif.o 00:02:19.962 CC lib/util/fd.o 00:02:19.962 CC lib/util/fd_group.o 00:02:19.962 CC lib/util/file.o 00:02:19.962 CC lib/util/hexlify.o 00:02:19.962 CC lib/util/iov.o 00:02:19.962 CC lib/util/math.o 00:02:19.962 CC lib/util/net.o 00:02:19.962 CC lib/util/pipe.o 00:02:19.962 CC lib/util/strerror_tls.o 00:02:19.962 CC lib/util/string.o 00:02:19.962 CC lib/util/uuid.o 00:02:19.962 CC lib/util/xor.o 00:02:19.962 CC lib/util/md5.o 00:02:19.962 CC lib/util/zipf.o 00:02:19.962 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.962 CC lib/vfio_user/host/vfio_user.o 00:02:19.962 LIB libspdk_dma.a 00:02:19.962 SO libspdk_dma.so.5.0 00:02:19.962 SYMLINK libspdk_dma.so 00:02:19.962 LIB libspdk_ioat.a 00:02:19.962 SO libspdk_ioat.so.7.0 00:02:19.962 SYMLINK libspdk_ioat.so 00:02:19.962 LIB libspdk_vfio_user.a 00:02:19.962 SO libspdk_vfio_user.so.5.0 00:02:19.962 SYMLINK libspdk_vfio_user.so 00:02:19.962 LIB libspdk_util.a 00:02:19.962 SO libspdk_util.so.10.1 00:02:19.962 SYMLINK libspdk_util.so 00:02:19.962 CC lib/rdma_utils/rdma_utils.o 00:02:19.962 CC lib/env_dpdk/env.o 00:02:19.962 CC lib/idxd/idxd.o 00:02:19.962 CC lib/json/json_parse.o 00:02:19.962 CC lib/conf/conf.o 00:02:19.962 CC lib/idxd/idxd_user.o 00:02:19.962 CC lib/vmd/vmd.o 00:02:19.962 CC lib/json/json_util.o 00:02:19.962 CC lib/env_dpdk/memory.o 00:02:19.962 CC lib/idxd/idxd_kernel.o 00:02:19.962 CC lib/json/json_write.o 00:02:19.962 CC lib/vmd/led.o 00:02:19.962 CC lib/env_dpdk/pci.o 00:02:19.962 CC lib/env_dpdk/init.o 00:02:19.962 CC lib/env_dpdk/threads.o 00:02:19.962 CC lib/env_dpdk/pci_ioat.o 00:02:19.962 CC lib/env_dpdk/pci_virtio.o 00:02:19.962 CC lib/env_dpdk/pci_vmd.o 00:02:19.962 CC lib/env_dpdk/pci_idxd.o 00:02:19.962 CC lib/env_dpdk/pci_event.o 00:02:19.962 CC lib/env_dpdk/pci_dpdk.o 00:02:19.962 CC lib/env_dpdk/sigbus_handler.o 00:02:19.962 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:19.962 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:19.962 LIB libspdk_trace_parser.a 00:02:19.962 SO libspdk_trace_parser.so.6.0 00:02:19.962 SYMLINK libspdk_trace_parser.so 00:02:19.962 LIB libspdk_conf.a 00:02:19.962 SO libspdk_conf.so.6.0 00:02:19.962 LIB libspdk_rdma_utils.a 00:02:19.962 SYMLINK libspdk_conf.so 00:02:19.962 LIB libspdk_json.a 00:02:19.962 SO libspdk_rdma_utils.so.1.0 00:02:19.962 SO libspdk_json.so.6.0 00:02:19.962 SYMLINK libspdk_rdma_utils.so 00:02:19.962 SYMLINK libspdk_json.so 00:02:19.962 CC lib/rdma_provider/common.o 00:02:19.962 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:19.962 CC lib/jsonrpc/jsonrpc_server.o 00:02:19.962 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:19.962 CC lib/jsonrpc/jsonrpc_client.o 00:02:19.962 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.962 LIB libspdk_idxd.a 00:02:19.962 SO libspdk_idxd.so.12.1 00:02:19.962 LIB libspdk_vmd.a 00:02:19.962 SYMLINK libspdk_idxd.so 00:02:19.962 SO libspdk_vmd.so.6.0 00:02:19.962 
SYMLINK libspdk_vmd.so 00:02:20.231 LIB libspdk_rdma_provider.a 00:02:20.231 SO libspdk_rdma_provider.so.7.0 00:02:20.231 LIB libspdk_jsonrpc.a 00:02:20.231 SO libspdk_jsonrpc.so.6.0 00:02:20.231 SYMLINK libspdk_rdma_provider.so 00:02:20.231 SYMLINK libspdk_jsonrpc.so 00:02:20.488 CC lib/rpc/rpc.o 00:02:20.745 LIB libspdk_rpc.a 00:02:20.745 SO libspdk_rpc.so.6.0 00:02:20.745 SYMLINK libspdk_rpc.so 00:02:20.745 CC lib/notify/notify.o 00:02:20.745 CC lib/notify/notify_rpc.o 00:02:20.745 CC lib/keyring/keyring.o 00:02:20.745 CC lib/trace/trace.o 00:02:20.745 CC lib/keyring/keyring_rpc.o 00:02:20.745 CC lib/trace/trace_flags.o 00:02:20.745 CC lib/trace/trace_rpc.o 00:02:21.002 LIB libspdk_notify.a 00:02:21.002 SO libspdk_notify.so.6.0 00:02:21.002 SYMLINK libspdk_notify.so 00:02:21.002 LIB libspdk_keyring.a 00:02:21.259 LIB libspdk_trace.a 00:02:21.259 SO libspdk_keyring.so.2.0 00:02:21.259 SO libspdk_trace.so.11.0 00:02:21.259 SYMLINK libspdk_keyring.so 00:02:21.259 SYMLINK libspdk_trace.so 00:02:21.259 LIB libspdk_env_dpdk.a 00:02:21.259 CC lib/thread/thread.o 00:02:21.259 CC lib/thread/iobuf.o 00:02:21.259 CC lib/sock/sock.o 00:02:21.259 CC lib/sock/sock_rpc.o 00:02:21.516 SO libspdk_env_dpdk.so.15.1 00:02:21.516 SYMLINK libspdk_env_dpdk.so 00:02:21.774 LIB libspdk_sock.a 00:02:21.774 SO libspdk_sock.so.10.0 00:02:21.774 SYMLINK libspdk_sock.so 00:02:22.032 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:22.032 CC lib/nvme/nvme_ctrlr.o 00:02:22.032 CC lib/nvme/nvme_fabric.o 00:02:22.032 CC lib/nvme/nvme_ns_cmd.o 00:02:22.032 CC lib/nvme/nvme_ns.o 00:02:22.032 CC lib/nvme/nvme_pcie_common.o 00:02:22.032 CC lib/nvme/nvme_pcie.o 00:02:22.032 CC lib/nvme/nvme_qpair.o 00:02:22.032 CC lib/nvme/nvme.o 00:02:22.032 CC lib/nvme/nvme_quirks.o 00:02:22.032 CC lib/nvme/nvme_transport.o 00:02:22.032 CC lib/nvme/nvme_discovery.o 00:02:22.032 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:22.032 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:22.032 CC lib/nvme/nvme_tcp.o 00:02:22.032 CC lib/nvme/nvme_opal.o 00:02:22.032 CC lib/nvme/nvme_io_msg.o 00:02:22.032 CC lib/nvme/nvme_poll_group.o 00:02:22.032 CC lib/nvme/nvme_zns.o 00:02:22.032 CC lib/nvme/nvme_stubs.o 00:02:22.032 CC lib/nvme/nvme_auth.o 00:02:22.032 CC lib/nvme/nvme_cuse.o 00:02:22.032 CC lib/nvme/nvme_rdma.o 00:02:22.032 CC lib/nvme/nvme_vfio_user.o 00:02:22.967 LIB libspdk_thread.a 00:02:22.967 SO libspdk_thread.so.11.0 00:02:23.225 SYMLINK libspdk_thread.so 00:02:23.225 CC lib/vfu_tgt/tgt_endpoint.o 00:02:23.225 CC lib/init/json_config.o 00:02:23.225 CC lib/virtio/virtio.o 00:02:23.225 CC lib/blob/blobstore.o 00:02:23.225 CC lib/vfu_tgt/tgt_rpc.o 00:02:23.225 CC lib/init/subsystem.o 00:02:23.225 CC lib/virtio/virtio_vhost_user.o 00:02:23.225 CC lib/blob/request.o 00:02:23.225 CC lib/init/subsystem_rpc.o 00:02:23.225 CC lib/blob/zeroes.o 00:02:23.225 CC lib/virtio/virtio_vfio_user.o 00:02:23.225 CC lib/init/rpc.o 00:02:23.225 CC lib/virtio/virtio_pci.o 00:02:23.225 CC lib/blob/blob_bs_dev.o 00:02:23.225 CC lib/accel/accel.o 00:02:23.225 CC lib/fsdev/fsdev.o 00:02:23.225 CC lib/accel/accel_rpc.o 00:02:23.225 CC lib/fsdev/fsdev_io.o 00:02:23.225 CC lib/accel/accel_sw.o 00:02:23.225 CC lib/fsdev/fsdev_rpc.o 00:02:23.483 LIB libspdk_init.a 00:02:23.483 SO libspdk_init.so.6.0 00:02:23.741 LIB libspdk_virtio.a 00:02:23.741 SYMLINK libspdk_init.so 00:02:23.741 LIB libspdk_vfu_tgt.a 00:02:23.741 SO libspdk_virtio.so.7.0 00:02:23.741 SO libspdk_vfu_tgt.so.3.0 00:02:23.741 SYMLINK libspdk_virtio.so 00:02:23.741 SYMLINK libspdk_vfu_tgt.so 00:02:23.741 CC lib/event/app.o 
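From this point the output is SPDK's own make rather than the DPDK ninja build: each CC line compiles one object, LIB libspdk_<name>.a archives a static library, SO libspdk_<name>.so.<ver> links the versioned shared object, and SYMLINK libspdk_<name>.so drops the unversioned symlink next to it. To double-check what one of those shared objects actually exports once the build finishes, a quick look with nm is enough; the build/lib path is SPDK's usual output location and is an assumption here, not something this log prints.

  # Illustrative only: list a few exported text symbols of a freshly built SPDK shared library.
  nm -D --defined-only build/lib/libspdk_log.so | awk '$2 == "T" {print $3}' | head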
00:02:23.741 CC lib/event/reactor.o 00:02:23.741 CC lib/event/log_rpc.o 00:02:23.741 CC lib/event/app_rpc.o 00:02:23.741 CC lib/event/scheduler_static.o 00:02:23.999 LIB libspdk_fsdev.a 00:02:23.999 SO libspdk_fsdev.so.2.0 00:02:23.999 SYMLINK libspdk_fsdev.so 00:02:24.257 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:24.257 LIB libspdk_event.a 00:02:24.257 SO libspdk_event.so.14.0 00:02:24.257 SYMLINK libspdk_event.so 00:02:24.515 LIB libspdk_accel.a 00:02:24.515 SO libspdk_accel.so.16.0 00:02:24.515 LIB libspdk_nvme.a 00:02:24.515 SYMLINK libspdk_accel.so 00:02:24.772 SO libspdk_nvme.so.15.0 00:02:24.772 CC lib/bdev/bdev.o 00:02:24.772 CC lib/bdev/bdev_rpc.o 00:02:24.773 CC lib/bdev/bdev_zone.o 00:02:24.773 CC lib/bdev/part.o 00:02:24.773 CC lib/bdev/scsi_nvme.o 00:02:24.773 SYMLINK libspdk_nvme.so 00:02:25.030 LIB libspdk_fuse_dispatcher.a 00:02:25.030 SO libspdk_fuse_dispatcher.so.1.0 00:02:25.030 SYMLINK libspdk_fuse_dispatcher.so 00:02:26.404 LIB libspdk_blob.a 00:02:26.404 SO libspdk_blob.so.11.0 00:02:26.662 SYMLINK libspdk_blob.so 00:02:26.662 CC lib/blobfs/blobfs.o 00:02:26.662 CC lib/blobfs/tree.o 00:02:26.662 CC lib/lvol/lvol.o 00:02:27.603 LIB libspdk_bdev.a 00:02:27.603 SO libspdk_bdev.so.17.0 00:02:27.603 LIB libspdk_blobfs.a 00:02:27.603 SO libspdk_blobfs.so.10.0 00:02:27.603 SYMLINK libspdk_bdev.so 00:02:27.603 SYMLINK libspdk_blobfs.so 00:02:27.603 LIB libspdk_lvol.a 00:02:27.603 SO libspdk_lvol.so.10.0 00:02:27.603 CC lib/scsi/dev.o 00:02:27.603 CC lib/ublk/ublk.o 00:02:27.603 CC lib/nbd/nbd.o 00:02:27.603 CC lib/scsi/lun.o 00:02:27.603 CC lib/ublk/ublk_rpc.o 00:02:27.603 CC lib/nvmf/ctrlr.o 00:02:27.603 CC lib/scsi/port.o 00:02:27.603 CC lib/nbd/nbd_rpc.o 00:02:27.603 CC lib/ftl/ftl_core.o 00:02:27.603 CC lib/nvmf/ctrlr_discovery.o 00:02:27.603 CC lib/scsi/scsi.o 00:02:27.603 CC lib/nvmf/ctrlr_bdev.o 00:02:27.603 CC lib/scsi/scsi_bdev.o 00:02:27.603 CC lib/ftl/ftl_init.o 00:02:27.603 CC lib/nvmf/subsystem.o 00:02:27.603 CC lib/scsi/scsi_pr.o 00:02:27.603 CC lib/ftl/ftl_layout.o 00:02:27.603 CC lib/nvmf/nvmf.o 00:02:27.603 CC lib/scsi/scsi_rpc.o 00:02:27.603 CC lib/nvmf/nvmf_rpc.o 00:02:27.603 CC lib/ftl/ftl_debug.o 00:02:27.603 CC lib/scsi/task.o 00:02:27.603 CC lib/ftl/ftl_io.o 00:02:27.603 CC lib/ftl/ftl_sb.o 00:02:27.603 CC lib/ftl/ftl_l2p.o 00:02:27.603 CC lib/nvmf/transport.o 00:02:27.603 CC lib/nvmf/tcp.o 00:02:27.603 CC lib/ftl/ftl_l2p_flat.o 00:02:27.603 CC lib/nvmf/mdns_server.o 00:02:27.603 CC lib/nvmf/stubs.o 00:02:27.603 CC lib/ftl/ftl_nv_cache.o 00:02:27.603 CC lib/nvmf/vfio_user.o 00:02:27.603 CC lib/ftl/ftl_band.o 00:02:27.603 CC lib/nvmf/rdma.o 00:02:27.603 CC lib/ftl/ftl_band_ops.o 00:02:27.603 CC lib/nvmf/auth.o 00:02:27.603 CC lib/ftl/ftl_writer.o 00:02:27.603 CC lib/ftl/ftl_reloc.o 00:02:27.603 CC lib/ftl/ftl_rq.o 00:02:27.603 CC lib/ftl/ftl_l2p_cache.o 00:02:27.603 CC lib/ftl/ftl_p2l.o 00:02:27.603 CC lib/ftl/ftl_p2l_log.o 00:02:27.603 CC lib/ftl/mngt/ftl_mngt.o 00:02:27.603 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:27.603 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:27.603 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:27.603 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:27.867 SYMLINK libspdk_lvol.so 00:02:27.867 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.128 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:02:28.128 CC lib/ftl/utils/ftl_conf.o 00:02:28.128 CC lib/ftl/utils/ftl_md.o 00:02:28.128 CC lib/ftl/utils/ftl_mempool.o 00:02:28.128 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.128 CC lib/ftl/utils/ftl_property.o 00:02:28.128 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.128 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.128 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.128 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.388 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.388 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.388 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:28.388 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.388 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.388 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.388 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:28.388 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:28.388 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.388 CC lib/ftl/base/ftl_base_dev.o 00:02:28.388 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.388 CC lib/ftl/ftl_trace.o 00:02:28.645 LIB libspdk_nbd.a 00:02:28.645 SO libspdk_nbd.so.7.0 00:02:28.645 LIB libspdk_scsi.a 00:02:28.645 SO libspdk_scsi.so.9.0 00:02:28.645 SYMLINK libspdk_nbd.so 00:02:28.903 SYMLINK libspdk_scsi.so 00:02:28.903 LIB libspdk_ublk.a 00:02:28.903 SO libspdk_ublk.so.3.0 00:02:28.903 CC lib/iscsi/conn.o 00:02:28.903 CC lib/vhost/vhost.o 00:02:28.903 CC lib/iscsi/init_grp.o 00:02:28.903 CC lib/vhost/vhost_rpc.o 00:02:28.903 CC lib/iscsi/iscsi.o 00:02:28.903 CC lib/vhost/vhost_scsi.o 00:02:28.903 CC lib/iscsi/param.o 00:02:28.903 CC lib/vhost/vhost_blk.o 00:02:28.903 CC lib/iscsi/portal_grp.o 00:02:28.903 SYMLINK libspdk_ublk.so 00:02:28.903 CC lib/vhost/rte_vhost_user.o 00:02:28.903 CC lib/iscsi/tgt_node.o 00:02:28.903 CC lib/iscsi/iscsi_subsystem.o 00:02:28.903 CC lib/iscsi/iscsi_rpc.o 00:02:28.903 CC lib/iscsi/task.o 00:02:29.162 LIB libspdk_ftl.a 00:02:29.422 SO libspdk_ftl.so.9.0 00:02:29.735 SYMLINK libspdk_ftl.so 00:02:30.327 LIB libspdk_vhost.a 00:02:30.327 SO libspdk_vhost.so.8.0 00:02:30.327 SYMLINK libspdk_vhost.so 00:02:30.327 LIB libspdk_nvmf.a 00:02:30.586 LIB libspdk_iscsi.a 00:02:30.586 SO libspdk_nvmf.so.20.0 00:02:30.586 SO libspdk_iscsi.so.8.0 00:02:30.586 SYMLINK libspdk_iscsi.so 00:02:30.586 SYMLINK libspdk_nvmf.so 00:02:30.844 CC module/vfu_device/vfu_virtio.o 00:02:30.844 CC module/env_dpdk/env_dpdk_rpc.o 00:02:30.844 CC module/vfu_device/vfu_virtio_blk.o 00:02:30.844 CC module/vfu_device/vfu_virtio_scsi.o 00:02:30.844 CC module/vfu_device/vfu_virtio_rpc.o 00:02:30.844 CC module/vfu_device/vfu_virtio_fs.o 00:02:31.102 CC module/accel/error/accel_error.o 00:02:31.102 CC module/keyring/linux/keyring.o 00:02:31.102 CC module/accel/error/accel_error_rpc.o 00:02:31.102 CC module/scheduler/gscheduler/gscheduler.o 00:02:31.102 CC module/accel/dsa/accel_dsa.o 00:02:31.102 CC module/sock/posix/posix.o 00:02:31.102 CC module/keyring/file/keyring.o 00:02:31.102 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:31.102 CC module/keyring/file/keyring_rpc.o 00:02:31.102 CC module/accel/dsa/accel_dsa_rpc.o 00:02:31.102 CC module/blob/bdev/blob_bdev.o 00:02:31.102 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:31.102 CC module/keyring/linux/keyring_rpc.o 00:02:31.102 CC module/accel/iaa/accel_iaa.o 00:02:31.102 CC module/accel/ioat/accel_ioat.o 00:02:31.102 CC module/accel/iaa/accel_iaa_rpc.o 00:02:31.102 CC module/accel/ioat/accel_ioat_rpc.o 00:02:31.102 CC module/fsdev/aio/fsdev_aio.o 00:02:31.102 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:31.102 CC module/fsdev/aio/linux_aio_mgr.o 00:02:31.102 LIB 
libspdk_env_dpdk_rpc.a 00:02:31.102 SO libspdk_env_dpdk_rpc.so.6.0 00:02:31.102 SYMLINK libspdk_env_dpdk_rpc.so 00:02:31.102 LIB libspdk_keyring_linux.a 00:02:31.102 LIB libspdk_keyring_file.a 00:02:31.102 LIB libspdk_scheduler_gscheduler.a 00:02:31.102 LIB libspdk_scheduler_dpdk_governor.a 00:02:31.360 SO libspdk_keyring_linux.so.1.0 00:02:31.360 SO libspdk_scheduler_gscheduler.so.4.0 00:02:31.360 SO libspdk_keyring_file.so.2.0 00:02:31.360 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:31.360 LIB libspdk_accel_ioat.a 00:02:31.360 SO libspdk_accel_ioat.so.6.0 00:02:31.360 SYMLINK libspdk_scheduler_gscheduler.so 00:02:31.360 LIB libspdk_accel_iaa.a 00:02:31.360 SYMLINK libspdk_keyring_linux.so 00:02:31.360 SYMLINK libspdk_keyring_file.so 00:02:31.360 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:31.360 LIB libspdk_accel_error.a 00:02:31.360 SO libspdk_accel_iaa.so.3.0 00:02:31.360 SO libspdk_accel_error.so.2.0 00:02:31.360 SYMLINK libspdk_accel_ioat.so 00:02:31.360 LIB libspdk_blob_bdev.a 00:02:31.360 LIB libspdk_scheduler_dynamic.a 00:02:31.360 SYMLINK libspdk_accel_iaa.so 00:02:31.360 LIB libspdk_accel_dsa.a 00:02:31.360 SO libspdk_blob_bdev.so.11.0 00:02:31.360 SO libspdk_scheduler_dynamic.so.4.0 00:02:31.360 SYMLINK libspdk_accel_error.so 00:02:31.360 SO libspdk_accel_dsa.so.5.0 00:02:31.360 SYMLINK libspdk_blob_bdev.so 00:02:31.360 SYMLINK libspdk_scheduler_dynamic.so 00:02:31.360 SYMLINK libspdk_accel_dsa.so 00:02:31.619 LIB libspdk_vfu_device.a 00:02:31.619 CC module/bdev/malloc/bdev_malloc.o 00:02:31.619 CC module/bdev/gpt/gpt.o 00:02:31.619 CC module/blobfs/bdev/blobfs_bdev.o 00:02:31.619 CC module/bdev/delay/vbdev_delay.o 00:02:31.619 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:31.619 CC module/bdev/gpt/vbdev_gpt.o 00:02:31.619 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:31.619 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:31.619 CC module/bdev/lvol/vbdev_lvol.o 00:02:31.619 CC module/bdev/error/vbdev_error.o 00:02:31.619 SO libspdk_vfu_device.so.3.0 00:02:31.619 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:31.619 CC module/bdev/error/vbdev_error_rpc.o 00:02:31.619 CC module/bdev/null/bdev_null.o 00:02:31.619 CC module/bdev/null/bdev_null_rpc.o 00:02:31.619 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:31.619 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:31.619 CC module/bdev/split/vbdev_split.o 00:02:31.619 CC module/bdev/nvme/bdev_nvme.o 00:02:31.619 CC module/bdev/aio/bdev_aio.o 00:02:31.619 CC module/bdev/split/vbdev_split_rpc.o 00:02:31.619 CC module/bdev/aio/bdev_aio_rpc.o 00:02:31.619 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:31.619 CC module/bdev/ftl/bdev_ftl.o 00:02:31.619 CC module/bdev/raid/bdev_raid.o 00:02:31.619 CC module/bdev/passthru/vbdev_passthru.o 00:02:31.619 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:31.619 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:31.619 CC module/bdev/nvme/nvme_rpc.o 00:02:31.619 CC module/bdev/raid/bdev_raid_rpc.o 00:02:31.619 CC module/bdev/nvme/bdev_mdns_client.o 00:02:31.619 CC module/bdev/raid/bdev_raid_sb.o 00:02:31.619 CC module/bdev/nvme/vbdev_opal.o 00:02:31.619 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:31.619 CC module/bdev/raid/raid0.o 00:02:31.619 CC module/bdev/iscsi/bdev_iscsi.o 00:02:31.619 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:31.619 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:31.619 CC module/bdev/raid/raid1.o 00:02:31.619 CC module/bdev/raid/concat.o 00:02:31.619 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:31.619 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:31.619 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:02:31.880 SYMLINK libspdk_vfu_device.so 00:02:32.139 LIB libspdk_fsdev_aio.a 00:02:32.139 SO libspdk_fsdev_aio.so.1.0 00:02:32.139 LIB libspdk_blobfs_bdev.a 00:02:32.139 SO libspdk_blobfs_bdev.so.6.0 00:02:32.139 LIB libspdk_bdev_split.a 00:02:32.139 LIB libspdk_bdev_null.a 00:02:32.139 LIB libspdk_sock_posix.a 00:02:32.139 SO libspdk_bdev_split.so.6.0 00:02:32.139 SYMLINK libspdk_fsdev_aio.so 00:02:32.139 SO libspdk_bdev_null.so.6.0 00:02:32.139 SO libspdk_sock_posix.so.6.0 00:02:32.139 SYMLINK libspdk_blobfs_bdev.so 00:02:32.139 LIB libspdk_bdev_gpt.a 00:02:32.139 SYMLINK libspdk_bdev_null.so 00:02:32.139 LIB libspdk_bdev_error.a 00:02:32.139 SO libspdk_bdev_gpt.so.6.0 00:02:32.139 SYMLINK libspdk_bdev_split.so 00:02:32.139 LIB libspdk_bdev_iscsi.a 00:02:32.139 SO libspdk_bdev_error.so.6.0 00:02:32.139 LIB libspdk_bdev_ftl.a 00:02:32.139 SYMLINK libspdk_sock_posix.so 00:02:32.139 LIB libspdk_bdev_passthru.a 00:02:32.139 LIB libspdk_bdev_zone_block.a 00:02:32.139 LIB libspdk_bdev_aio.a 00:02:32.139 SO libspdk_bdev_iscsi.so.6.0 00:02:32.397 SO libspdk_bdev_ftl.so.6.0 00:02:32.397 SYMLINK libspdk_bdev_gpt.so 00:02:32.397 SO libspdk_bdev_aio.so.6.0 00:02:32.397 SO libspdk_bdev_passthru.so.6.0 00:02:32.397 SO libspdk_bdev_zone_block.so.6.0 00:02:32.397 LIB libspdk_bdev_delay.a 00:02:32.397 SYMLINK libspdk_bdev_error.so 00:02:32.397 LIB libspdk_bdev_malloc.a 00:02:32.397 SO libspdk_bdev_delay.so.6.0 00:02:32.397 SYMLINK libspdk_bdev_iscsi.so 00:02:32.397 SYMLINK libspdk_bdev_ftl.so 00:02:32.397 SO libspdk_bdev_malloc.so.6.0 00:02:32.397 SYMLINK libspdk_bdev_passthru.so 00:02:32.397 SYMLINK libspdk_bdev_zone_block.so 00:02:32.397 SYMLINK libspdk_bdev_aio.so 00:02:32.397 SYMLINK libspdk_bdev_delay.so 00:02:32.397 SYMLINK libspdk_bdev_malloc.so 00:02:32.397 LIB libspdk_bdev_lvol.a 00:02:32.397 SO libspdk_bdev_lvol.so.6.0 00:02:32.397 LIB libspdk_bdev_virtio.a 00:02:32.397 SYMLINK libspdk_bdev_lvol.so 00:02:32.397 SO libspdk_bdev_virtio.so.6.0 00:02:32.657 SYMLINK libspdk_bdev_virtio.so 00:02:32.915 LIB libspdk_bdev_raid.a 00:02:32.915 SO libspdk_bdev_raid.so.6.0 00:02:32.915 SYMLINK libspdk_bdev_raid.so 00:02:34.294 LIB libspdk_bdev_nvme.a 00:02:34.294 SO libspdk_bdev_nvme.so.7.1 00:02:34.551 SYMLINK libspdk_bdev_nvme.so 00:02:34.809 CC module/event/subsystems/vmd/vmd.o 00:02:34.809 CC module/event/subsystems/scheduler/scheduler.o 00:02:34.809 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:34.809 CC module/event/subsystems/fsdev/fsdev.o 00:02:34.809 CC module/event/subsystems/keyring/keyring.o 00:02:34.809 CC module/event/subsystems/iobuf/iobuf.o 00:02:34.809 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:34.809 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:34.809 CC module/event/subsystems/sock/sock.o 00:02:34.809 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:35.067 LIB libspdk_event_keyring.a 00:02:35.067 LIB libspdk_event_vhost_blk.a 00:02:35.067 LIB libspdk_event_fsdev.a 00:02:35.067 LIB libspdk_event_scheduler.a 00:02:35.067 LIB libspdk_event_vmd.a 00:02:35.067 LIB libspdk_event_vfu_tgt.a 00:02:35.067 LIB libspdk_event_sock.a 00:02:35.067 SO libspdk_event_keyring.so.1.0 00:02:35.067 SO libspdk_event_fsdev.so.1.0 00:02:35.067 SO libspdk_event_vhost_blk.so.3.0 00:02:35.067 LIB libspdk_event_iobuf.a 00:02:35.067 SO libspdk_event_scheduler.so.4.0 00:02:35.067 SO libspdk_event_vfu_tgt.so.3.0 00:02:35.067 SO libspdk_event_sock.so.5.0 00:02:35.067 SO libspdk_event_vmd.so.6.0 00:02:35.067 SO libspdk_event_iobuf.so.3.0 00:02:35.067 SYMLINK 
libspdk_event_keyring.so 00:02:35.067 SYMLINK libspdk_event_fsdev.so 00:02:35.067 SYMLINK libspdk_event_vhost_blk.so 00:02:35.067 SYMLINK libspdk_event_scheduler.so 00:02:35.067 SYMLINK libspdk_event_vfu_tgt.so 00:02:35.067 SYMLINK libspdk_event_sock.so 00:02:35.067 SYMLINK libspdk_event_vmd.so 00:02:35.067 SYMLINK libspdk_event_iobuf.so 00:02:35.325 CC module/event/subsystems/accel/accel.o 00:02:35.325 LIB libspdk_event_accel.a 00:02:35.325 SO libspdk_event_accel.so.6.0 00:02:35.583 SYMLINK libspdk_event_accel.so 00:02:35.583 CC module/event/subsystems/bdev/bdev.o 00:02:35.841 LIB libspdk_event_bdev.a 00:02:35.841 SO libspdk_event_bdev.so.6.0 00:02:35.841 SYMLINK libspdk_event_bdev.so 00:02:36.098 CC module/event/subsystems/nbd/nbd.o 00:02:36.098 CC module/event/subsystems/scsi/scsi.o 00:02:36.098 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:36.099 CC module/event/subsystems/ublk/ublk.o 00:02:36.099 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:36.357 LIB libspdk_event_nbd.a 00:02:36.357 LIB libspdk_event_ublk.a 00:02:36.357 LIB libspdk_event_scsi.a 00:02:36.357 SO libspdk_event_nbd.so.6.0 00:02:36.357 SO libspdk_event_ublk.so.3.0 00:02:36.357 SO libspdk_event_scsi.so.6.0 00:02:36.357 SYMLINK libspdk_event_ublk.so 00:02:36.357 SYMLINK libspdk_event_nbd.so 00:02:36.357 SYMLINK libspdk_event_scsi.so 00:02:36.357 LIB libspdk_event_nvmf.a 00:02:36.357 SO libspdk_event_nvmf.so.6.0 00:02:36.357 SYMLINK libspdk_event_nvmf.so 00:02:36.615 CC module/event/subsystems/iscsi/iscsi.o 00:02:36.615 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:36.615 LIB libspdk_event_vhost_scsi.a 00:02:36.615 LIB libspdk_event_iscsi.a 00:02:36.615 SO libspdk_event_vhost_scsi.so.3.0 00:02:36.615 SO libspdk_event_iscsi.so.6.0 00:02:36.615 SYMLINK libspdk_event_vhost_scsi.so 00:02:36.615 SYMLINK libspdk_event_iscsi.so 00:02:36.874 SO libspdk.so.6.0 00:02:36.874 SYMLINK libspdk.so 00:02:37.139 CC app/trace_record/trace_record.o 00:02:37.139 CC test/rpc_client/rpc_client_test.o 00:02:37.139 TEST_HEADER include/spdk/accel.h 00:02:37.139 TEST_HEADER include/spdk/accel_module.h 00:02:37.139 CC app/spdk_nvme_identify/identify.o 00:02:37.139 TEST_HEADER include/spdk/assert.h 00:02:37.139 CC app/spdk_lspci/spdk_lspci.o 00:02:37.139 TEST_HEADER include/spdk/barrier.h 00:02:37.139 CC app/spdk_nvme_discover/discovery_aer.o 00:02:37.139 TEST_HEADER include/spdk/base64.h 00:02:37.139 CC app/spdk_top/spdk_top.o 00:02:37.139 TEST_HEADER include/spdk/bdev_module.h 00:02:37.139 TEST_HEADER include/spdk/bdev.h 00:02:37.139 TEST_HEADER include/spdk/bdev_zone.h 00:02:37.139 CXX app/trace/trace.o 00:02:37.139 TEST_HEADER include/spdk/bit_array.h 00:02:37.139 TEST_HEADER include/spdk/bit_pool.h 00:02:37.139 TEST_HEADER include/spdk/blob_bdev.h 00:02:37.139 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:37.139 TEST_HEADER include/spdk/blob.h 00:02:37.139 TEST_HEADER include/spdk/blobfs.h 00:02:37.139 CC app/spdk_nvme_perf/perf.o 00:02:37.139 TEST_HEADER include/spdk/conf.h 00:02:37.139 TEST_HEADER include/spdk/config.h 00:02:37.139 TEST_HEADER include/spdk/cpuset.h 00:02:37.139 TEST_HEADER include/spdk/crc16.h 00:02:37.139 TEST_HEADER include/spdk/crc32.h 00:02:37.139 TEST_HEADER include/spdk/crc64.h 00:02:37.139 TEST_HEADER include/spdk/dif.h 00:02:37.139 TEST_HEADER include/spdk/dma.h 00:02:37.139 TEST_HEADER include/spdk/endian.h 00:02:37.139 TEST_HEADER include/spdk/env_dpdk.h 00:02:37.139 TEST_HEADER include/spdk/env.h 00:02:37.139 TEST_HEADER include/spdk/event.h 00:02:37.139 TEST_HEADER include/spdk/fd_group.h 
00:02:37.139 TEST_HEADER include/spdk/fd.h 00:02:37.139 TEST_HEADER include/spdk/file.h 00:02:37.139 TEST_HEADER include/spdk/fsdev.h 00:02:37.139 TEST_HEADER include/spdk/fsdev_module.h 00:02:37.139 TEST_HEADER include/spdk/ftl.h 00:02:37.139 TEST_HEADER include/spdk/gpt_spec.h 00:02:37.139 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:37.139 TEST_HEADER include/spdk/hexlify.h 00:02:37.139 TEST_HEADER include/spdk/histogram_data.h 00:02:37.139 TEST_HEADER include/spdk/idxd.h 00:02:37.139 TEST_HEADER include/spdk/idxd_spec.h 00:02:37.139 TEST_HEADER include/spdk/init.h 00:02:37.139 TEST_HEADER include/spdk/ioat.h 00:02:37.139 TEST_HEADER include/spdk/ioat_spec.h 00:02:37.139 TEST_HEADER include/spdk/iscsi_spec.h 00:02:37.139 TEST_HEADER include/spdk/json.h 00:02:37.139 TEST_HEADER include/spdk/jsonrpc.h 00:02:37.139 TEST_HEADER include/spdk/keyring.h 00:02:37.139 TEST_HEADER include/spdk/keyring_module.h 00:02:37.139 TEST_HEADER include/spdk/likely.h 00:02:37.139 TEST_HEADER include/spdk/log.h 00:02:37.139 TEST_HEADER include/spdk/lvol.h 00:02:37.139 TEST_HEADER include/spdk/md5.h 00:02:37.139 TEST_HEADER include/spdk/memory.h 00:02:37.139 TEST_HEADER include/spdk/nbd.h 00:02:37.139 TEST_HEADER include/spdk/mmio.h 00:02:37.139 TEST_HEADER include/spdk/net.h 00:02:37.139 TEST_HEADER include/spdk/notify.h 00:02:37.139 TEST_HEADER include/spdk/nvme.h 00:02:37.139 TEST_HEADER include/spdk/nvme_intel.h 00:02:37.139 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:37.139 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:37.139 TEST_HEADER include/spdk/nvme_spec.h 00:02:37.139 TEST_HEADER include/spdk/nvme_zns.h 00:02:37.139 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:37.139 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:37.139 TEST_HEADER include/spdk/nvmf.h 00:02:37.139 TEST_HEADER include/spdk/nvmf_spec.h 00:02:37.139 TEST_HEADER include/spdk/nvmf_transport.h 00:02:37.139 TEST_HEADER include/spdk/opal.h 00:02:37.139 TEST_HEADER include/spdk/opal_spec.h 00:02:37.139 TEST_HEADER include/spdk/pci_ids.h 00:02:37.139 TEST_HEADER include/spdk/queue.h 00:02:37.139 TEST_HEADER include/spdk/pipe.h 00:02:37.139 TEST_HEADER include/spdk/reduce.h 00:02:37.139 TEST_HEADER include/spdk/rpc.h 00:02:37.139 TEST_HEADER include/spdk/scheduler.h 00:02:37.139 TEST_HEADER include/spdk/scsi.h 00:02:37.139 TEST_HEADER include/spdk/scsi_spec.h 00:02:37.139 TEST_HEADER include/spdk/sock.h 00:02:37.139 TEST_HEADER include/spdk/stdinc.h 00:02:37.139 TEST_HEADER include/spdk/string.h 00:02:37.139 TEST_HEADER include/spdk/thread.h 00:02:37.139 TEST_HEADER include/spdk/trace.h 00:02:37.139 TEST_HEADER include/spdk/trace_parser.h 00:02:37.139 TEST_HEADER include/spdk/tree.h 00:02:37.139 TEST_HEADER include/spdk/ublk.h 00:02:37.139 TEST_HEADER include/spdk/util.h 00:02:37.139 TEST_HEADER include/spdk/version.h 00:02:37.139 TEST_HEADER include/spdk/uuid.h 00:02:37.139 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:37.139 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:37.139 TEST_HEADER include/spdk/vhost.h 00:02:37.139 TEST_HEADER include/spdk/vmd.h 00:02:37.139 TEST_HEADER include/spdk/xor.h 00:02:37.139 TEST_HEADER include/spdk/zipf.h 00:02:37.139 CXX test/cpp_headers/accel.o 00:02:37.139 CXX test/cpp_headers/accel_module.o 00:02:37.139 CXX test/cpp_headers/assert.o 00:02:37.139 CXX test/cpp_headers/barrier.o 00:02:37.139 CXX test/cpp_headers/base64.o 00:02:37.139 CXX test/cpp_headers/bdev.o 00:02:37.139 CXX test/cpp_headers/bdev_module.o 00:02:37.139 CXX test/cpp_headers/bdev_zone.o 00:02:37.139 CXX test/cpp_headers/bit_array.o 
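The TEST_HEADER / CXX test/cpp_headers/<name>.o lines are SPDK's header self-sufficiency check: every public header under include/spdk is compiled on its own, as C++, so a header that is missing one of its own includes (or breaks under a C++ compiler) fails here instead of in a consumer. A hand-run equivalent for a single header, assuming you are in the SPDK source root with g++ available, is roughly:

  # Rough stand-alone equivalent of one "CXX test/cpp_headers/<name>.o" line.
  printf '#include <spdk/nvme.h>\n' > /tmp/header_check.cpp
  g++ -std=c++11 -I include -c /tmp/header_check.cpp -o /dev/null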
00:02:37.139 CXX test/cpp_headers/bit_pool.o 00:02:37.139 CC app/spdk_dd/spdk_dd.o 00:02:37.139 CXX test/cpp_headers/blob_bdev.o 00:02:37.139 CXX test/cpp_headers/blobfs_bdev.o 00:02:37.139 CXX test/cpp_headers/blobfs.o 00:02:37.139 CXX test/cpp_headers/blob.o 00:02:37.139 CXX test/cpp_headers/conf.o 00:02:37.139 CXX test/cpp_headers/config.o 00:02:37.139 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:37.139 CC app/nvmf_tgt/nvmf_main.o 00:02:37.139 CXX test/cpp_headers/cpuset.o 00:02:37.139 CXX test/cpp_headers/crc16.o 00:02:37.139 CC app/iscsi_tgt/iscsi_tgt.o 00:02:37.139 CXX test/cpp_headers/crc32.o 00:02:37.140 CC app/spdk_tgt/spdk_tgt.o 00:02:37.140 CC test/app/jsoncat/jsoncat.o 00:02:37.140 CC test/env/memory/memory_ut.o 00:02:37.140 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:37.140 CC test/app/histogram_perf/histogram_perf.o 00:02:37.140 CC test/thread/poller_perf/poller_perf.o 00:02:37.140 CC test/env/pci/pci_ut.o 00:02:37.140 CC test/app/stub/stub.o 00:02:37.140 CC examples/ioat/perf/perf.o 00:02:37.140 CC test/env/vtophys/vtophys.o 00:02:37.140 CC app/fio/nvme/fio_plugin.o 00:02:37.140 CC examples/ioat/verify/verify.o 00:02:37.140 CC examples/util/zipf/zipf.o 00:02:37.140 CC test/dma/test_dma/test_dma.o 00:02:37.140 CC test/app/bdev_svc/bdev_svc.o 00:02:37.399 CC app/fio/bdev/fio_plugin.o 00:02:37.399 LINK spdk_lspci 00:02:37.399 CC test/env/mem_callbacks/mem_callbacks.o 00:02:37.399 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:37.399 LINK spdk_nvme_discover 00:02:37.399 LINK rpc_client_test 00:02:37.399 LINK jsoncat 00:02:37.399 LINK vtophys 00:02:37.666 CXX test/cpp_headers/crc64.o 00:02:37.666 LINK poller_perf 00:02:37.666 LINK env_dpdk_post_init 00:02:37.666 CXX test/cpp_headers/dif.o 00:02:37.666 LINK zipf 00:02:37.666 CXX test/cpp_headers/dma.o 00:02:37.666 LINK nvmf_tgt 00:02:37.666 CXX test/cpp_headers/endian.o 00:02:37.666 LINK histogram_perf 00:02:37.666 CXX test/cpp_headers/env_dpdk.o 00:02:37.666 CXX test/cpp_headers/env.o 00:02:37.666 CXX test/cpp_headers/event.o 00:02:37.666 LINK interrupt_tgt 00:02:37.666 LINK spdk_trace_record 00:02:37.667 CXX test/cpp_headers/fd_group.o 00:02:37.667 CXX test/cpp_headers/fd.o 00:02:37.667 CXX test/cpp_headers/file.o 00:02:37.667 CXX test/cpp_headers/fsdev.o 00:02:37.667 LINK stub 00:02:37.667 CXX test/cpp_headers/fsdev_module.o 00:02:37.667 LINK iscsi_tgt 00:02:37.667 CXX test/cpp_headers/ftl.o 00:02:37.667 CXX test/cpp_headers/fuse_dispatcher.o 00:02:37.667 CXX test/cpp_headers/gpt_spec.o 00:02:37.667 LINK spdk_tgt 00:02:37.667 LINK ioat_perf 00:02:37.667 LINK bdev_svc 00:02:37.667 LINK verify 00:02:37.667 CXX test/cpp_headers/hexlify.o 00:02:37.667 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:37.667 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:37.667 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:37.667 CXX test/cpp_headers/histogram_data.o 00:02:37.926 CXX test/cpp_headers/idxd_spec.o 00:02:37.926 CXX test/cpp_headers/idxd.o 00:02:37.926 CXX test/cpp_headers/init.o 00:02:37.926 CXX test/cpp_headers/ioat.o 00:02:37.926 CXX test/cpp_headers/ioat_spec.o 00:02:37.926 CXX test/cpp_headers/iscsi_spec.o 00:02:37.926 CXX test/cpp_headers/json.o 00:02:37.926 LINK spdk_dd 00:02:37.926 CXX test/cpp_headers/jsonrpc.o 00:02:37.926 CXX test/cpp_headers/keyring.o 00:02:37.926 CXX test/cpp_headers/keyring_module.o 00:02:37.926 CXX test/cpp_headers/likely.o 00:02:37.927 CXX test/cpp_headers/log.o 00:02:37.927 CXX test/cpp_headers/lvol.o 00:02:37.927 LINK spdk_trace 00:02:37.927 CXX test/cpp_headers/md5.o 00:02:37.927 CXX 
test/cpp_headers/memory.o 00:02:37.927 CXX test/cpp_headers/mmio.o 00:02:37.927 CXX test/cpp_headers/nbd.o 00:02:37.927 CXX test/cpp_headers/net.o 00:02:37.927 LINK pci_ut 00:02:37.927 CXX test/cpp_headers/notify.o 00:02:37.927 CXX test/cpp_headers/nvme.o 00:02:37.927 CXX test/cpp_headers/nvme_intel.o 00:02:38.190 CXX test/cpp_headers/nvme_ocssd.o 00:02:38.190 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:38.190 CXX test/cpp_headers/nvme_spec.o 00:02:38.190 CXX test/cpp_headers/nvme_zns.o 00:02:38.190 CXX test/cpp_headers/nvmf_cmd.o 00:02:38.190 CC test/event/event_perf/event_perf.o 00:02:38.190 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:38.190 CC test/event/reactor/reactor.o 00:02:38.190 CC test/event/reactor_perf/reactor_perf.o 00:02:38.190 CXX test/cpp_headers/nvmf.o 00:02:38.190 CXX test/cpp_headers/nvmf_spec.o 00:02:38.190 CXX test/cpp_headers/nvmf_transport.o 00:02:38.190 CC test/event/app_repeat/app_repeat.o 00:02:38.190 LINK nvme_fuzz 00:02:38.190 LINK spdk_nvme 00:02:38.190 CXX test/cpp_headers/opal.o 00:02:38.190 CXX test/cpp_headers/opal_spec.o 00:02:38.190 CC test/event/scheduler/scheduler.o 00:02:38.190 CC examples/sock/hello_world/hello_sock.o 00:02:38.190 CXX test/cpp_headers/pci_ids.o 00:02:38.190 CC examples/vmd/lsvmd/lsvmd.o 00:02:38.190 CXX test/cpp_headers/pipe.o 00:02:38.459 CC examples/thread/thread/thread_ex.o 00:02:38.459 LINK test_dma 00:02:38.459 CC examples/idxd/perf/perf.o 00:02:38.459 LINK spdk_bdev 00:02:38.459 CXX test/cpp_headers/queue.o 00:02:38.459 CXX test/cpp_headers/reduce.o 00:02:38.459 CXX test/cpp_headers/rpc.o 00:02:38.459 CC examples/vmd/led/led.o 00:02:38.459 CXX test/cpp_headers/scheduler.o 00:02:38.459 CXX test/cpp_headers/scsi.o 00:02:38.459 CXX test/cpp_headers/scsi_spec.o 00:02:38.459 CXX test/cpp_headers/sock.o 00:02:38.459 CXX test/cpp_headers/stdinc.o 00:02:38.459 CXX test/cpp_headers/string.o 00:02:38.459 CXX test/cpp_headers/thread.o 00:02:38.459 CXX test/cpp_headers/trace.o 00:02:38.459 CXX test/cpp_headers/trace_parser.o 00:02:38.459 CXX test/cpp_headers/tree.o 00:02:38.459 LINK event_perf 00:02:38.459 CXX test/cpp_headers/ublk.o 00:02:38.459 CXX test/cpp_headers/util.o 00:02:38.459 LINK reactor 00:02:38.459 CXX test/cpp_headers/uuid.o 00:02:38.459 CXX test/cpp_headers/version.o 00:02:38.459 CXX test/cpp_headers/vfio_user_pci.o 00:02:38.459 LINK vhost_fuzz 00:02:38.459 LINK reactor_perf 00:02:38.459 CXX test/cpp_headers/vfio_user_spec.o 00:02:38.459 CXX test/cpp_headers/vhost.o 00:02:38.717 LINK mem_callbacks 00:02:38.717 CXX test/cpp_headers/xor.o 00:02:38.717 CXX test/cpp_headers/vmd.o 00:02:38.717 CXX test/cpp_headers/zipf.o 00:02:38.717 LINK app_repeat 00:02:38.717 CC app/vhost/vhost.o 00:02:38.717 LINK lsvmd 00:02:38.717 LINK spdk_nvme_perf 00:02:38.717 LINK spdk_nvme_identify 00:02:38.717 LINK led 00:02:38.717 LINK spdk_top 00:02:38.717 LINK scheduler 00:02:38.717 LINK hello_sock 00:02:38.717 LINK thread 00:02:38.975 LINK vhost 00:02:38.975 CC test/nvme/simple_copy/simple_copy.o 00:02:38.975 CC test/nvme/overhead/overhead.o 00:02:38.975 CC test/nvme/aer/aer.o 00:02:38.975 CC test/nvme/e2edp/nvme_dp.o 00:02:38.975 CC test/nvme/fused_ordering/fused_ordering.o 00:02:38.975 CC test/nvme/sgl/sgl.o 00:02:38.975 CC test/nvme/boot_partition/boot_partition.o 00:02:38.975 CC test/nvme/reset/reset.o 00:02:38.975 CC test/nvme/fdp/fdp.o 00:02:38.975 CC test/nvme/err_injection/err_injection.o 00:02:38.975 CC test/nvme/compliance/nvme_compliance.o 00:02:38.975 CC test/nvme/startup/startup.o 00:02:38.975 CC test/nvme/reserve/reserve.o 00:02:38.975 
CC test/nvme/connect_stress/connect_stress.o 00:02:38.975 CC test/nvme/cuse/cuse.o 00:02:38.975 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:38.975 LINK idxd_perf 00:02:38.975 CC test/blobfs/mkfs/mkfs.o 00:02:38.975 CC test/accel/dif/dif.o 00:02:38.975 CC test/lvol/esnap/esnap.o 00:02:39.233 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:39.233 CC examples/nvme/hotplug/hotplug.o 00:02:39.233 CC examples/nvme/hello_world/hello_world.o 00:02:39.233 CC examples/nvme/reconnect/reconnect.o 00:02:39.233 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.233 CC examples/nvme/abort/abort.o 00:02:39.233 CC examples/nvme/arbitration/arbitration.o 00:02:39.233 LINK err_injection 00:02:39.233 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:39.233 LINK connect_stress 00:02:39.233 LINK fused_ordering 00:02:39.233 LINK doorbell_aers 00:02:39.233 LINK reserve 00:02:39.233 CC examples/accel/perf/accel_perf.o 00:02:39.233 LINK boot_partition 00:02:39.233 LINK simple_copy 00:02:39.233 LINK memory_ut 00:02:39.233 LINK mkfs 00:02:39.233 LINK startup 00:02:39.233 LINK aer 00:02:39.491 LINK reset 00:02:39.491 LINK sgl 00:02:39.491 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:39.491 CC examples/blob/hello_world/hello_blob.o 00:02:39.491 CC examples/blob/cli/blobcli.o 00:02:39.491 LINK overhead 00:02:39.491 LINK nvme_dp 00:02:39.491 LINK nvme_compliance 00:02:39.491 LINK fdp 00:02:39.491 LINK cmb_copy 00:02:39.491 LINK pmr_persistence 00:02:39.749 LINK hotplug 00:02:39.749 LINK arbitration 00:02:39.749 LINK hello_world 00:02:39.749 LINK abort 00:02:39.749 LINK hello_blob 00:02:39.749 LINK hello_fsdev 00:02:39.749 LINK reconnect 00:02:39.749 LINK dif 00:02:40.006 LINK accel_perf 00:02:40.006 LINK blobcli 00:02:40.006 LINK nvme_manage 00:02:40.264 LINK iscsi_fuzz 00:02:40.264 CC test/bdev/bdevio/bdevio.o 00:02:40.264 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.264 CC examples/bdev/bdevperf/bdevperf.o 00:02:40.522 LINK hello_bdev 00:02:40.522 LINK cuse 00:02:40.781 LINK bdevio 00:02:41.039 LINK bdevperf 00:02:41.605 CC examples/nvmf/nvmf/nvmf.o 00:02:41.863 LINK nvmf 00:02:44.397 LINK esnap 00:02:44.397 00:02:44.397 real 1m10.551s 00:02:44.397 user 11m54.513s 00:02:44.397 sys 2m37.574s 00:02:44.397 11:21:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:44.397 11:21:24 make -- common/autotest_common.sh@10 -- $ set +x 00:02:44.397 ************************************ 00:02:44.397 END TEST make 00:02:44.397 ************************************ 00:02:44.397 11:21:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:44.397 11:21:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:44.397 11:21:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:44.397 11:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.397 11:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:44.397 11:21:24 -- pm/common@44 -- $ pid=2726727 00:02:44.397 11:21:24 -- pm/common@50 -- $ kill -TERM 2726727 00:02:44.397 11:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.397 11:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:44.397 11:21:24 -- pm/common@44 -- $ pid=2726728 00:02:44.397 11:21:24 -- pm/common@50 -- $ kill -TERM 2726728 00:02:44.397 11:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.397 11:21:24 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:44.397 11:21:24 -- pm/common@44 -- $ pid=2726731 00:02:44.397 11:21:24 -- pm/common@50 -- $ kill -TERM 2726731 00:02:44.397 11:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.397 11:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:44.397 11:21:24 -- pm/common@44 -- $ pid=2726761 00:02:44.397 11:21:24 -- pm/common@50 -- $ sudo -E kill -TERM 2726761 00:02:44.655 11:21:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:44.655 11:21:24 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:44.655 11:21:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:44.655 11:21:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:44.655 11:21:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:44.655 11:21:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:44.655 11:21:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:44.655 11:21:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:44.655 11:21:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:44.655 11:21:24 -- scripts/common.sh@336 -- # IFS=.-: 00:02:44.655 11:21:24 -- scripts/common.sh@336 -- # read -ra ver1 00:02:44.655 11:21:24 -- scripts/common.sh@337 -- # IFS=.-: 00:02:44.655 11:21:24 -- scripts/common.sh@337 -- # read -ra ver2 00:02:44.655 11:21:24 -- scripts/common.sh@338 -- # local 'op=<' 00:02:44.655 11:21:24 -- scripts/common.sh@340 -- # ver1_l=2 00:02:44.655 11:21:24 -- scripts/common.sh@341 -- # ver2_l=1 00:02:44.655 11:21:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:44.655 11:21:24 -- scripts/common.sh@344 -- # case "$op" in 00:02:44.655 11:21:24 -- scripts/common.sh@345 -- # : 1 00:02:44.655 11:21:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:44.655 11:21:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:44.655 11:21:24 -- scripts/common.sh@365 -- # decimal 1 00:02:44.655 11:21:24 -- scripts/common.sh@353 -- # local d=1 00:02:44.655 11:21:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:44.655 11:21:24 -- scripts/common.sh@355 -- # echo 1 00:02:44.655 11:21:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:44.655 11:21:24 -- scripts/common.sh@366 -- # decimal 2 00:02:44.655 11:21:24 -- scripts/common.sh@353 -- # local d=2 00:02:44.655 11:21:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:44.655 11:21:24 -- scripts/common.sh@355 -- # echo 2 00:02:44.655 11:21:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:44.655 11:21:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:44.655 11:21:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:44.655 11:21:24 -- scripts/common.sh@368 -- # return 0 00:02:44.655 11:21:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:44.655 11:21:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.655 --rc genhtml_branch_coverage=1 00:02:44.655 --rc genhtml_function_coverage=1 00:02:44.655 --rc genhtml_legend=1 00:02:44.655 --rc geninfo_all_blocks=1 00:02:44.655 --rc geninfo_unexecuted_blocks=1 00:02:44.655 00:02:44.655 ' 00:02:44.655 11:21:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.655 --rc genhtml_branch_coverage=1 00:02:44.655 --rc genhtml_function_coverage=1 00:02:44.655 --rc genhtml_legend=1 00:02:44.655 --rc geninfo_all_blocks=1 00:02:44.655 --rc geninfo_unexecuted_blocks=1 00:02:44.655 00:02:44.655 ' 00:02:44.655 11:21:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:44.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.655 --rc genhtml_branch_coverage=1 00:02:44.655 --rc genhtml_function_coverage=1 00:02:44.655 --rc genhtml_legend=1 00:02:44.655 --rc geninfo_all_blocks=1 00:02:44.656 --rc geninfo_unexecuted_blocks=1 00:02:44.656 00:02:44.656 ' 00:02:44.656 11:21:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:44.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.656 --rc genhtml_branch_coverage=1 00:02:44.656 --rc genhtml_function_coverage=1 00:02:44.656 --rc genhtml_legend=1 00:02:44.656 --rc geninfo_all_blocks=1 00:02:44.656 --rc geninfo_unexecuted_blocks=1 00:02:44.656 00:02:44.656 ' 00:02:44.656 11:21:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:44.656 11:21:24 -- nvmf/common.sh@7 -- # uname -s 00:02:44.656 11:21:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:44.656 11:21:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:44.656 11:21:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:44.656 11:21:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:44.656 11:21:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:44.656 11:21:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:44.656 11:21:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:44.656 11:21:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:44.656 11:21:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:44.656 11:21:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:44.656 11:21:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:02:44.656 11:21:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:02:44.656 11:21:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:44.656 11:21:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:44.656 11:21:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:44.656 11:21:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:44.656 11:21:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:44.656 11:21:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:44.656 11:21:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:44.656 11:21:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:44.656 11:21:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:44.656 11:21:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.656 11:21:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.656 11:21:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.656 11:21:24 -- paths/export.sh@5 -- # export PATH 00:02:44.656 11:21:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.656 11:21:24 -- nvmf/common.sh@51 -- # : 0 00:02:44.656 11:21:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:44.656 11:21:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:44.656 11:21:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:44.656 11:21:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:44.656 11:21:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:44.656 11:21:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:44.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:44.656 11:21:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:44.656 11:21:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:44.656 11:21:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:44.656 11:21:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:44.656 11:21:24 -- spdk/autotest.sh@32 -- # uname -s 00:02:44.656 11:21:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:44.656 11:21:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:44.656 11:21:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:02:44.656 11:21:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:44.656 11:21:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:44.656 11:21:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:44.656 11:21:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:44.656 11:21:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:44.656 11:21:24 -- spdk/autotest.sh@48 -- # udevadm_pid=2786793 00:02:44.656 11:21:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:44.656 11:21:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:44.656 11:21:25 -- pm/common@17 -- # local monitor 00:02:44.656 11:21:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.656 11:21:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.656 11:21:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.656 11:21:25 -- pm/common@21 -- # date +%s 00:02:44.656 11:21:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.656 11:21:25 -- pm/common@21 -- # date +%s 00:02:44.656 11:21:25 -- pm/common@25 -- # sleep 1 00:02:44.656 11:21:25 -- pm/common@21 -- # date +%s 00:02:44.656 11:21:25 -- pm/common@21 -- # date +%s 00:02:44.656 11:21:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666085 00:02:44.656 11:21:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666085 00:02:44.656 11:21:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666085 00:02:44.656 11:21:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666085 00:02:44.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666085_collect-cpu-load.pm.log 00:02:44.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666085_collect-vmstat.pm.log 00:02:44.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666085_collect-cpu-temp.pm.log 00:02:44.656 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666085_collect-bmc-pm.bmc.pm.log 00:02:45.592 11:21:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:45.592 11:21:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:45.592 11:21:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:45.592 11:21:26 -- common/autotest_common.sh@10 -- # set +x 00:02:45.592 11:21:26 -- spdk/autotest.sh@59 -- # create_test_list 00:02:45.592 11:21:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:45.592 11:21:26 -- common/autotest_common.sh@10 -- # set +x 00:02:45.851 11:21:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:45.851 11:21:26 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.851 11:21:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.851 11:21:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:45.851 11:21:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:45.851 11:21:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:45.851 11:21:26 -- common/autotest_common.sh@1457 -- # uname 00:02:45.851 11:21:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:45.851 11:21:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:45.851 11:21:26 -- common/autotest_common.sh@1477 -- # uname 00:02:45.851 11:21:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:45.851 11:21:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:45.851 11:21:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:45.851 lcov: LCOV version 1.15 00:02:45.851 11:21:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:03.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:03.951 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:25.895 11:22:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:25.895 11:22:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.895 11:22:03 -- common/autotest_common.sh@10 -- # set +x 00:03:25.895 11:22:03 -- spdk/autotest.sh@78 -- # rm -f 00:03:25.895 11:22:03 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.895 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:25.895 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:25.895 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:25.895 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:25.895 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:25.895 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:25.895 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:25.895 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:25.895 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:25.895 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:25.895 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:25.895 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:25.895 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:25.895 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:25.895 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:25.895 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:25.895 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:25.895 11:22:04 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:25.895 11:22:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:25.895 11:22:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:25.895 11:22:04 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:25.895 11:22:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:25.895 11:22:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:25.895 11:22:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:25.895 11:22:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.895 11:22:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:25.895 11:22:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:25.895 11:22:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:25.895 11:22:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:25.895 11:22:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:25.895 11:22:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:25.895 11:22:04 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:25.895 No valid GPT data, bailing 00:03:25.895 11:22:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:25.895 11:22:04 -- scripts/common.sh@394 -- # pt= 00:03:25.895 11:22:04 -- scripts/common.sh@395 -- # return 1 00:03:25.895 11:22:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:25.895 1+0 records in 00:03:25.895 1+0 records out 00:03:25.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00143613 s, 730 MB/s 00:03:25.895 11:22:04 -- spdk/autotest.sh@105 -- # sync 00:03:25.895 11:22:04 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:25.895 11:22:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:25.895 11:22:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:26.461 11:22:06 -- spdk/autotest.sh@111 -- # uname -s 00:03:26.461 11:22:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:26.461 11:22:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:26.461 11:22:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:27.864 Hugepages 00:03:27.864 node hugesize free / total 00:03:27.864 node0 1048576kB 0 / 0 00:03:27.864 node0 2048kB 0 / 0 00:03:27.864 node1 1048576kB 0 / 0 00:03:27.864 node1 2048kB 0 / 0 00:03:27.864 00:03:27.864 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:27.864 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:27.864 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:27.864 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:27.864 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:27.864 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:03:27.864 11:22:08 -- spdk/autotest.sh@117 -- # uname -s 00:03:27.864 11:22:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:27.864 11:22:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:27.864 11:22:08 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.243 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:29.243 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:29.243 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:30.183 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:30.183 11:22:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:31.560 11:22:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:31.560 11:22:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:31.560 11:22:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:31.560 11:22:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:31.560 11:22:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:31.560 11:22:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:31.560 11:22:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:31.560 11:22:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:31.560 11:22:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:31.560 11:22:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:31.560 11:22:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:03:31.560 11:22:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.498 Waiting for block devices as requested 00:03:32.498 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:32.757 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:32.757 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:32.757 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:32.757 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:33.017 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:33.017 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:33.017 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:33.276 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:33.276 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:33.276 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:33.535 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:33.535 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:33.535 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:33.792 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:33.792 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:33.792 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:34.051 11:22:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:34.051 11:22:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:03:34.051 11:22:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:34.051 11:22:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:34.051 11:22:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:34.051 11:22:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:34.051 11:22:14 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:34.051 11:22:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:34.051 11:22:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:34.051 11:22:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:34.051 11:22:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:34.051 11:22:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:34.051 11:22:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:34.051 11:22:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:34.051 11:22:14 -- common/autotest_common.sh@1543 -- # continue 00:03:34.051 11:22:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:34.051 11:22:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.051 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:03:34.051 11:22:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:34.051 11:22:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.051 11:22:14 -- common/autotest_common.sh@10 -- # set +x 00:03:34.051 11:22:14 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.428 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:35.428 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:35.428 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.365 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.365 11:22:16 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:36.365 11:22:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:36.365 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:03:36.365 11:22:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:36.365 11:22:16 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:36.365 11:22:16 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:36.365 11:22:16 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:36.365 11:22:16 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:36.365 11:22:16 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:36.365 11:22:16 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:36.365 11:22:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:36.365 11:22:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:36.365 11:22:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:36.365 11:22:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:36.365 11:22:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:36.365 11:22:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:36.624 11:22:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:36.624 11:22:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:03:36.624 11:22:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:36.624 11:22:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:36.624 11:22:16 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:36.624 11:22:16 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:36.624 11:22:16 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:36.624 11:22:16 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:36.624 11:22:16 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:03:36.625 11:22:16 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:03:36.625 11:22:16 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2797320 00:03:36.625 11:22:16 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.625 11:22:16 -- common/autotest_common.sh@1585 -- # waitforlisten 2797320 00:03:36.625 11:22:16 -- common/autotest_common.sh@835 -- # '[' -z 2797320 ']' 00:03:36.625 11:22:16 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.625 11:22:16 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:36.625 11:22:16 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.625 11:22:16 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:36.625 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:03:36.625 [2024-11-15 11:22:16.901977] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
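The sysfs reads in the trace just above (get_nvme_bdfs_by_id 0x0a54) pick out NVMe controllers whose PCI device ID matches 0x0a54 by reading /sys/bus/pci/devices/<bdf>/device for each candidate BDF. A minimal standalone sketch of the same lookup follows; the 0x0a54 ID and the sysfs layout come from the trace, while the loop and the class filter are illustrative additions, not the exact autotest helper.

# Sketch only: list NVMe BDFs whose PCI device ID is 0x0a54, mirroring the
# /sys/bus/pci/devices/<bdf>/device reads shown in the trace above.
target_id=0x0a54
for dev in /sys/bus/pci/devices/*; do
  class=$(cat "$dev/class")                 # 0x0108xx marks an NVMe controller
  [[ $class == 0x0108* ]] || continue
  [[ $(cat "$dev/device") == "$target_id" ]] && basename "$dev"
done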
00:03:36.625 [2024-11-15 11:22:16.902094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797320 ] 00:03:36.625 [2024-11-15 11:22:16.966572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.625 [2024-11-15 11:22:17.020520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.884 11:22:17 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:36.884 11:22:17 -- common/autotest_common.sh@868 -- # return 0 00:03:36.884 11:22:17 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:36.884 11:22:17 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:36.884 11:22:17 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:40.171 nvme0n1 00:03:40.171 11:22:20 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:40.429 [2024-11-15 11:22:20.623725] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:40.429 [2024-11-15 11:22:20.623767] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:40.429 request: 00:03:40.429 { 00:03:40.429 "nvme_ctrlr_name": "nvme0", 00:03:40.429 "password": "test", 00:03:40.429 "method": "bdev_nvme_opal_revert", 00:03:40.429 "req_id": 1 00:03:40.429 } 00:03:40.429 Got JSON-RPC error response 00:03:40.429 response: 00:03:40.429 { 00:03:40.429 "code": -32603, 00:03:40.429 "message": "Internal error" 00:03:40.429 } 00:03:40.429 11:22:20 -- common/autotest_common.sh@1591 -- # true 00:03:40.429 11:22:20 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:40.429 11:22:20 -- common/autotest_common.sh@1595 -- # killprocess 2797320 00:03:40.429 11:22:20 -- common/autotest_common.sh@954 -- # '[' -z 2797320 ']' 00:03:40.429 11:22:20 -- common/autotest_common.sh@958 -- # kill -0 2797320 00:03:40.429 11:22:20 -- common/autotest_common.sh@959 -- # uname 00:03:40.429 11:22:20 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:40.429 11:22:20 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2797320 00:03:40.429 11:22:20 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:40.429 11:22:20 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:40.429 11:22:20 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2797320' 00:03:40.429 killing process with pid 2797320 00:03:40.429 11:22:20 -- common/autotest_common.sh@973 -- # kill 2797320 00:03:40.429 11:22:20 -- common/autotest_common.sh@978 -- # wait 2797320 00:03:42.329 11:22:22 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:42.329 11:22:22 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:42.329 11:22:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.329 11:22:22 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:42.329 11:22:22 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:42.329 11:22:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.329 11:22:22 -- common/autotest_common.sh@10 -- # set +x 00:03:42.329 11:22:22 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:42.329 11:22:22 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.329 11:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.329 11:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.329 11:22:22 -- common/autotest_common.sh@10 -- # set +x 00:03:42.329 ************************************ 00:03:42.329 START TEST env 00:03:42.329 ************************************ 00:03:42.329 11:22:22 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.329 * Looking for test storage... 00:03:42.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.329 11:22:22 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:42.329 11:22:22 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:42.329 11:22:22 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:42.329 11:22:22 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:42.329 11:22:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.329 11:22:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.329 11:22:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.329 11:22:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.329 11:22:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.329 11:22:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.329 11:22:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.329 11:22:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.330 11:22:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.330 11:22:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.330 11:22:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.330 11:22:22 env -- scripts/common.sh@344 -- # case "$op" in 00:03:42.330 11:22:22 env -- scripts/common.sh@345 -- # : 1 00:03:42.330 11:22:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.330 11:22:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:42.330 11:22:22 env -- scripts/common.sh@365 -- # decimal 1 00:03:42.330 11:22:22 env -- scripts/common.sh@353 -- # local d=1 00:03:42.330 11:22:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.330 11:22:22 env -- scripts/common.sh@355 -- # echo 1 00:03:42.330 11:22:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.330 11:22:22 env -- scripts/common.sh@366 -- # decimal 2 00:03:42.330 11:22:22 env -- scripts/common.sh@353 -- # local d=2 00:03:42.330 11:22:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.330 11:22:22 env -- scripts/common.sh@355 -- # echo 2 00:03:42.330 11:22:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.330 11:22:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.330 11:22:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.330 11:22:22 env -- scripts/common.sh@368 -- # return 0 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.330 --rc genhtml_branch_coverage=1 00:03:42.330 --rc genhtml_function_coverage=1 00:03:42.330 --rc genhtml_legend=1 00:03:42.330 --rc geninfo_all_blocks=1 00:03:42.330 --rc geninfo_unexecuted_blocks=1 00:03:42.330 00:03:42.330 ' 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.330 --rc genhtml_branch_coverage=1 00:03:42.330 --rc genhtml_function_coverage=1 00:03:42.330 --rc genhtml_legend=1 00:03:42.330 --rc geninfo_all_blocks=1 00:03:42.330 --rc geninfo_unexecuted_blocks=1 00:03:42.330 00:03:42.330 ' 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.330 --rc genhtml_branch_coverage=1 00:03:42.330 --rc genhtml_function_coverage=1 00:03:42.330 --rc genhtml_legend=1 00:03:42.330 --rc geninfo_all_blocks=1 00:03:42.330 --rc geninfo_unexecuted_blocks=1 00:03:42.330 00:03:42.330 ' 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:42.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.330 --rc genhtml_branch_coverage=1 00:03:42.330 --rc genhtml_function_coverage=1 00:03:42.330 --rc genhtml_legend=1 00:03:42.330 --rc geninfo_all_blocks=1 00:03:42.330 --rc geninfo_unexecuted_blocks=1 00:03:42.330 00:03:42.330 ' 00:03:42.330 11:22:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.330 11:22:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.330 11:22:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.330 ************************************ 00:03:42.330 START TEST env_memory 00:03:42.330 ************************************ 00:03:42.330 11:22:22 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.330 00:03:42.330 00:03:42.330 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.330 http://cunit.sourceforge.net/ 00:03:42.330 00:03:42.330 00:03:42.330 Suite: memory 00:03:42.330 Test: alloc and free memory map ...[2024-11-15 11:22:22.699084] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.330 passed 00:03:42.330 Test: mem map translation ...[2024-11-15 11:22:22.719181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.330 [2024-11-15 11:22:22.719205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.330 [2024-11-15 11:22:22.719257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.330 [2024-11-15 11:22:22.719269] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.330 passed 00:03:42.589 Test: mem map registration ...[2024-11-15 11:22:22.762342] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:42.589 [2024-11-15 11:22:22.762364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:42.589 passed 00:03:42.589 Test: mem map adjacent registrations ...passed 00:03:42.589 00:03:42.589 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.589 suites 1 1 n/a 0 0 00:03:42.589 tests 4 4 4 0 0 00:03:42.589 asserts 152 152 152 0 n/a 00:03:42.589 00:03:42.589 Elapsed time = 0.145 seconds 00:03:42.589 00:03:42.589 real 0m0.154s 00:03:42.589 user 0m0.144s 00:03:42.589 sys 0m0.010s 00:03:42.589 11:22:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.589 11:22:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:42.589 ************************************ 00:03:42.589 END TEST env_memory 00:03:42.589 ************************************ 00:03:42.589 11:22:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.589 11:22:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.589 11:22:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.589 11:22:22 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.589 ************************************ 00:03:42.589 START TEST env_vtophys 00:03:42.590 ************************************ 00:03:42.590 11:22:22 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.590 EAL: lib.eal log level changed from notice to debug 00:03:42.590 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.590 EAL: Detected lcore 1 as core 1 on socket 0 00:03:42.590 EAL: Detected lcore 2 as core 2 on socket 0 00:03:42.590 EAL: Detected lcore 3 as core 3 on socket 0 00:03:42.590 EAL: Detected lcore 4 as core 4 on socket 0 00:03:42.590 EAL: Detected lcore 5 as core 5 on socket 0 00:03:42.590 EAL: Detected lcore 6 as core 8 on socket 0 00:03:42.590 EAL: Detected lcore 7 as core 9 on socket 0 00:03:42.590 EAL: Detected lcore 8 as core 10 on socket 0 00:03:42.590 EAL: Detected lcore 9 as core 11 on socket 0 00:03:42.590 EAL: Detected lcore 10 
as core 12 on socket 0 00:03:42.590 EAL: Detected lcore 11 as core 13 on socket 0 00:03:42.590 EAL: Detected lcore 12 as core 0 on socket 1 00:03:42.590 EAL: Detected lcore 13 as core 1 on socket 1 00:03:42.590 EAL: Detected lcore 14 as core 2 on socket 1 00:03:42.590 EAL: Detected lcore 15 as core 3 on socket 1 00:03:42.590 EAL: Detected lcore 16 as core 4 on socket 1 00:03:42.590 EAL: Detected lcore 17 as core 5 on socket 1 00:03:42.590 EAL: Detected lcore 18 as core 8 on socket 1 00:03:42.590 EAL: Detected lcore 19 as core 9 on socket 1 00:03:42.590 EAL: Detected lcore 20 as core 10 on socket 1 00:03:42.590 EAL: Detected lcore 21 as core 11 on socket 1 00:03:42.590 EAL: Detected lcore 22 as core 12 on socket 1 00:03:42.590 EAL: Detected lcore 23 as core 13 on socket 1 00:03:42.590 EAL: Detected lcore 24 as core 0 on socket 0 00:03:42.590 EAL: Detected lcore 25 as core 1 on socket 0 00:03:42.590 EAL: Detected lcore 26 as core 2 on socket 0 00:03:42.590 EAL: Detected lcore 27 as core 3 on socket 0 00:03:42.590 EAL: Detected lcore 28 as core 4 on socket 0 00:03:42.590 EAL: Detected lcore 29 as core 5 on socket 0 00:03:42.590 EAL: Detected lcore 30 as core 8 on socket 0 00:03:42.590 EAL: Detected lcore 31 as core 9 on socket 0 00:03:42.590 EAL: Detected lcore 32 as core 10 on socket 0 00:03:42.590 EAL: Detected lcore 33 as core 11 on socket 0 00:03:42.590 EAL: Detected lcore 34 as core 12 on socket 0 00:03:42.590 EAL: Detected lcore 35 as core 13 on socket 0 00:03:42.590 EAL: Detected lcore 36 as core 0 on socket 1 00:03:42.590 EAL: Detected lcore 37 as core 1 on socket 1 00:03:42.590 EAL: Detected lcore 38 as core 2 on socket 1 00:03:42.590 EAL: Detected lcore 39 as core 3 on socket 1 00:03:42.590 EAL: Detected lcore 40 as core 4 on socket 1 00:03:42.590 EAL: Detected lcore 41 as core 5 on socket 1 00:03:42.590 EAL: Detected lcore 42 as core 8 on socket 1 00:03:42.590 EAL: Detected lcore 43 as core 9 on socket 1 00:03:42.590 EAL: Detected lcore 44 as core 10 on socket 1 00:03:42.590 EAL: Detected lcore 45 as core 11 on socket 1 00:03:42.590 EAL: Detected lcore 46 as core 12 on socket 1 00:03:42.590 EAL: Detected lcore 47 as core 13 on socket 1 00:03:42.590 EAL: Maximum logical cores by configuration: 128 00:03:42.590 EAL: Detected CPU lcores: 48 00:03:42.590 EAL: Detected NUMA nodes: 2 00:03:42.590 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:42.590 EAL: Detected shared linkage of DPDK 00:03:42.590 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.590 EAL: Bus pci wants IOVA as 'DC' 00:03:42.590 EAL: Buses did not request a specific IOVA mode. 00:03:42.590 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:42.590 EAL: Selected IOVA mode 'VA' 00:03:42.590 EAL: Probing VFIO support... 00:03:42.590 EAL: IOMMU type 1 (Type 1) is supported 00:03:42.590 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:42.590 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:42.590 EAL: VFIO support initialized 00:03:42.590 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.590 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.590 EAL: Setting up physically contiguous memory... 
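The EAL lines just above ("Probing VFIO support...", "IOMMU is available, selecting IOVA as VA mode", "VFIO support initialized") record that this host exposes a usable type-1 IOMMU, which is why the test proceeds with IOVA=VA. A rough manual check of the same preconditions, assuming only standard Linux sysfs/devfs paths rather than anything specific to this job, might look like:

# Sketch only: confirm VFIO and the IOMMU are usable before running DPDK/SPDK in VA mode.
if [ -e /dev/vfio/vfio ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
  echo "vfio container present and IOMMU groups populated"
else
  echo "no usable IOMMU/VFIO; EAL would have to fall back to a different IOVA mode"
fi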
00:03:42.590 EAL: Setting maximum number of open files to 524288 00:03:42.590 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.590 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:42.590 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.590 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:42.590 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.590 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:42.590 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.590 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.590 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:42.590 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:42.590 EAL: Hugepages will be freed exactly as allocated. 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: TSC frequency is ~2700000 KHz 00:03:42.590 EAL: Main lcore 0 is ready (tid=7f3cb8735a00;cpuset=[0]) 00:03:42.590 EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 0 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 2MB 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:42.590 EAL: Mem event callback 'spdk:(nil)' registered 00:03:42.590 00:03:42.590 00:03:42.590 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.590 http://cunit.sourceforge.net/ 00:03:42.590 00:03:42.590 00:03:42.590 Suite: components_suite 00:03:42.590 Test: vtophys_malloc_test ...passed 00:03:42.590 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 4 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 4MB 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was shrunk by 4MB 00:03:42.590 EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 4 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 6MB 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was shrunk by 6MB 00:03:42.590 EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 4 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 10MB 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was shrunk by 10MB 00:03:42.590 EAL: Trying to obtain current memory policy. 
00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 4 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 18MB 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was shrunk by 18MB 00:03:42.590 EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 4 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 34MB 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was shrunk by 34MB 00:03:42.590 EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.590 EAL: Restoring previous memory policy: 4 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was expanded by 66MB 00:03:42.590 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.590 EAL: request: mp_malloc_sync 00:03:42.590 EAL: No shared files mode enabled, IPC is disabled 00:03:42.590 EAL: Heap on socket 0 was shrunk by 66MB 00:03:42.590 EAL: Trying to obtain current memory policy. 00:03:42.590 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.849 EAL: Restoring previous memory policy: 4 00:03:42.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.849 EAL: request: mp_malloc_sync 00:03:42.849 EAL: No shared files mode enabled, IPC is disabled 00:03:42.849 EAL: Heap on socket 0 was expanded by 130MB 00:03:42.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.849 EAL: request: mp_malloc_sync 00:03:42.849 EAL: No shared files mode enabled, IPC is disabled 00:03:42.849 EAL: Heap on socket 0 was shrunk by 130MB 00:03:42.849 EAL: Trying to obtain current memory policy. 00:03:42.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:42.849 EAL: Restoring previous memory policy: 4 00:03:42.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.849 EAL: request: mp_malloc_sync 00:03:42.849 EAL: No shared files mode enabled, IPC is disabled 00:03:42.849 EAL: Heap on socket 0 was expanded by 258MB 00:03:42.849 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.849 EAL: request: mp_malloc_sync 00:03:42.849 EAL: No shared files mode enabled, IPC is disabled 00:03:42.849 EAL: Heap on socket 0 was shrunk by 258MB 00:03:42.849 EAL: Trying to obtain current memory policy. 
00:03:42.849 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.107 EAL: Restoring previous memory policy: 4 00:03:43.107 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.107 EAL: request: mp_malloc_sync 00:03:43.107 EAL: No shared files mode enabled, IPC is disabled 00:03:43.107 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.107 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.365 EAL: request: mp_malloc_sync 00:03:43.365 EAL: No shared files mode enabled, IPC is disabled 00:03:43.365 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.365 EAL: Trying to obtain current memory policy. 00:03:43.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.623 EAL: Restoring previous memory policy: 4 00:03:43.623 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.623 EAL: request: mp_malloc_sync 00:03:43.623 EAL: No shared files mode enabled, IPC is disabled 00:03:43.623 EAL: Heap on socket 0 was expanded by 1026MB 00:03:43.881 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.139 EAL: request: mp_malloc_sync 00:03:44.139 EAL: No shared files mode enabled, IPC is disabled 00:03:44.139 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:44.139 passed 00:03:44.139 00:03:44.139 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.139 suites 1 1 n/a 0 0 00:03:44.139 tests 2 2 2 0 0 00:03:44.139 asserts 497 497 497 0 n/a 00:03:44.139 00:03:44.139 Elapsed time = 1.351 seconds 00:03:44.139 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.139 EAL: request: mp_malloc_sync 00:03:44.139 EAL: No shared files mode enabled, IPC is disabled 00:03:44.139 EAL: Heap on socket 0 was shrunk by 2MB 00:03:44.139 EAL: No shared files mode enabled, IPC is disabled 00:03:44.139 EAL: No shared files mode enabled, IPC is disabled 00:03:44.139 EAL: No shared files mode enabled, IPC is disabled 00:03:44.139 00:03:44.139 real 0m1.484s 00:03:44.139 user 0m0.866s 00:03:44.139 sys 0m0.571s 00:03:44.139 11:22:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.139 11:22:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:44.139 ************************************ 00:03:44.139 END TEST env_vtophys 00:03:44.139 ************************************ 00:03:44.139 11:22:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.139 11:22:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.139 11:22:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.139 11:22:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.139 ************************************ 00:03:44.139 START TEST env_pci 00:03:44.139 ************************************ 00:03:44.139 11:22:24 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.139 00:03:44.139 00:03:44.139 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.139 http://cunit.sourceforge.net/ 00:03:44.139 00:03:44.139 00:03:44.139 Suite: pci 00:03:44.139 Test: pci_hook ...[2024-11-15 11:22:24.412292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2798221 has claimed it 00:03:44.139 EAL: Cannot find device (10000:00:01.0) 00:03:44.139 EAL: Failed to attach device on primary process 00:03:44.139 passed 00:03:44.139 00:03:44.139 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:44.139 suites 1 1 n/a 0 0 00:03:44.139 tests 1 1 1 0 0 00:03:44.139 asserts 25 25 25 0 n/a 00:03:44.139 00:03:44.139 Elapsed time = 0.022 seconds 00:03:44.139 00:03:44.139 real 0m0.036s 00:03:44.139 user 0m0.013s 00:03:44.139 sys 0m0.022s 00:03:44.139 11:22:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.139 11:22:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:44.139 ************************************ 00:03:44.139 END TEST env_pci 00:03:44.139 ************************************ 00:03:44.139 11:22:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.139 11:22:24 env -- env/env.sh@15 -- # uname 00:03:44.139 11:22:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.139 11:22:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.139 11:22:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.139 11:22:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:44.139 11:22:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.139 11:22:24 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.139 ************************************ 00:03:44.139 START TEST env_dpdk_post_init 00:03:44.139 ************************************ 00:03:44.140 11:22:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.140 EAL: Detected CPU lcores: 48 00:03:44.140 EAL: Detected NUMA nodes: 2 00:03:44.140 EAL: Detected shared linkage of DPDK 00:03:44.140 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.140 EAL: Selected IOVA mode 'VA' 00:03:44.140 EAL: VFIO support initialized 00:03:44.140 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.398 EAL: Using IOMMU type 1 (Type 1) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:44.398 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:45.331 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:45.331 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
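Two things are worth noting in the stretch above. First, the env_pci error lines ("Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0", "Cannot find device (10000:00:01.0)", "Failed to attach device on primary process") appear to be the intended negative path: the pci_hook test claims the bogus 10000:00:01.0 address so that spdk_pci_device_claim fails, and the suite still reports passed. Second, env_dpdk_post_init then probes every ioat channel on both sockets plus the one local NVMe controller at 0000:0b:00.0, using the argv built by env.sh (-c 0x1 to stay on one core, --base-virtaddr to pin the DPDK mappings).

A sketch of re-running the same post-init probe by hand; the command line is the one recorded in the log, only the working directory and the need for root privileges and hugepages are assumptions:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000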
00:03:48.609 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:48.609 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:48.609 Starting DPDK initialization... 00:03:48.609 Starting SPDK post initialization... 00:03:48.609 SPDK NVMe probe 00:03:48.609 Attaching to 0000:0b:00.0 00:03:48.609 Attached to 0000:0b:00.0 00:03:48.609 Cleaning up... 00:03:48.609 00:03:48.609 real 0m4.378s 00:03:48.609 user 0m3.009s 00:03:48.609 sys 0m0.426s 00:03:48.609 11:22:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.609 11:22:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.609 ************************************ 00:03:48.609 END TEST env_dpdk_post_init 00:03:48.609 ************************************ 00:03:48.609 11:22:28 env -- env/env.sh@26 -- # uname 00:03:48.609 11:22:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.609 11:22:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.609 11:22:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.609 11:22:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.609 11:22:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.609 ************************************ 00:03:48.609 START TEST env_mem_callbacks 00:03:48.609 ************************************ 00:03:48.609 11:22:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.609 EAL: Detected CPU lcores: 48 00:03:48.609 EAL: Detected NUMA nodes: 2 00:03:48.609 EAL: Detected shared linkage of DPDK 00:03:48.609 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.609 EAL: Selected IOVA mode 'VA' 00:03:48.609 EAL: VFIO support initialized 00:03:48.609 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.609 00:03:48.609 00:03:48.609 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.609 http://cunit.sourceforge.net/ 00:03:48.609 00:03:48.609 00:03:48.609 Suite: memory 00:03:48.609 Test: test ... 
00:03:48.609 register 0x200000200000 2097152 00:03:48.609 malloc 3145728 00:03:48.609 register 0x200000400000 4194304 00:03:48.609 buf 0x200000500000 len 3145728 PASSED 00:03:48.609 malloc 64 00:03:48.609 buf 0x2000004fff40 len 64 PASSED 00:03:48.609 malloc 4194304 00:03:48.609 register 0x200000800000 6291456 00:03:48.609 buf 0x200000a00000 len 4194304 PASSED 00:03:48.609 free 0x200000500000 3145728 00:03:48.609 free 0x2000004fff40 64 00:03:48.609 unregister 0x200000400000 4194304 PASSED 00:03:48.609 free 0x200000a00000 4194304 00:03:48.609 unregister 0x200000800000 6291456 PASSED 00:03:48.609 malloc 8388608 00:03:48.609 register 0x200000400000 10485760 00:03:48.609 buf 0x200000600000 len 8388608 PASSED 00:03:48.609 free 0x200000600000 8388608 00:03:48.609 unregister 0x200000400000 10485760 PASSED 00:03:48.609 passed 00:03:48.609 00:03:48.609 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.609 suites 1 1 n/a 0 0 00:03:48.609 tests 1 1 1 0 0 00:03:48.609 asserts 15 15 15 0 n/a 00:03:48.609 00:03:48.609 Elapsed time = 0.005 seconds 00:03:48.609 00:03:48.609 real 0m0.049s 00:03:48.609 user 0m0.014s 00:03:48.609 sys 0m0.035s 00:03:48.609 11:22:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.609 11:22:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:48.609 ************************************ 00:03:48.609 END TEST env_mem_callbacks 00:03:48.609 ************************************ 00:03:48.609 00:03:48.609 real 0m6.510s 00:03:48.609 user 0m4.253s 00:03:48.609 sys 0m1.291s 00:03:48.609 11:22:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:48.609 11:22:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:48.609 ************************************ 00:03:48.609 END TEST env 00:03:48.609 ************************************ 00:03:48.609 11:22:29 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.609 11:22:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:48.609 11:22:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:48.609 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:03:48.867 ************************************ 00:03:48.867 START TEST rpc 00:03:48.867 ************************************ 00:03:48.867 11:22:29 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.867 * Looking for test storage... 
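The trace just above is what env_mem_callbacks verifies: every malloc that forces new hugepage memory to be mapped is reported to the test's callback as a register event covering the whole newly mapped region (the 3 MB malloc shows up as "register 0x200000400000 4194304", i.e. two 2 MB hugepages), the 64-byte malloc reuses already registered memory and produces no event at all, and frees that release whole hugepages come back as the matching unregister events before the suite reports its 15 asserts passed.

A sketch of re-running only this check, using the binary path recorded by env.sh@29 above; root privileges and the hugepage setup from earlier in the run are assumed:

  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks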
00:03:48.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:48.867 11:22:29 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.867 11:22:29 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.867 11:22:29 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.867 11:22:29 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.867 11:22:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.867 11:22:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.867 11:22:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.867 11:22:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.867 11:22:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.867 11:22:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.867 11:22:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.867 11:22:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.867 11:22:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.867 11:22:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.867 11:22:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.867 11:22:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:48.867 11:22:29 rpc -- scripts/common.sh@345 -- # : 1 00:03:48.867 11:22:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.867 11:22:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.867 11:22:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:48.868 11:22:29 rpc -- scripts/common.sh@353 -- # local d=1 00:03:48.868 11:22:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.868 11:22:29 rpc -- scripts/common.sh@355 -- # echo 1 00:03:48.868 11:22:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.868 11:22:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:48.868 11:22:29 rpc -- scripts/common.sh@353 -- # local d=2 00:03:48.868 11:22:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.868 11:22:29 rpc -- scripts/common.sh@355 -- # echo 2 00:03:48.868 11:22:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.868 11:22:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.868 11:22:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.868 11:22:29 rpc -- scripts/common.sh@368 -- # return 0 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.868 --rc genhtml_branch_coverage=1 00:03:48.868 --rc genhtml_function_coverage=1 00:03:48.868 --rc genhtml_legend=1 00:03:48.868 --rc geninfo_all_blocks=1 00:03:48.868 --rc geninfo_unexecuted_blocks=1 00:03:48.868 00:03:48.868 ' 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.868 --rc genhtml_branch_coverage=1 00:03:48.868 --rc genhtml_function_coverage=1 00:03:48.868 --rc genhtml_legend=1 00:03:48.868 --rc geninfo_all_blocks=1 00:03:48.868 --rc geninfo_unexecuted_blocks=1 00:03:48.868 00:03:48.868 ' 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.868 --rc genhtml_branch_coverage=1 00:03:48.868 --rc genhtml_function_coverage=1 
00:03:48.868 --rc genhtml_legend=1 00:03:48.868 --rc geninfo_all_blocks=1 00:03:48.868 --rc geninfo_unexecuted_blocks=1 00:03:48.868 00:03:48.868 ' 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.868 --rc genhtml_branch_coverage=1 00:03:48.868 --rc genhtml_function_coverage=1 00:03:48.868 --rc genhtml_legend=1 00:03:48.868 --rc geninfo_all_blocks=1 00:03:48.868 --rc geninfo_unexecuted_blocks=1 00:03:48.868 00:03:48.868 ' 00:03:48.868 11:22:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2799004 00:03:48.868 11:22:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:48.868 11:22:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.868 11:22:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2799004 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 2799004 ']' 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.868 11:22:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.868 [2024-11-15 11:22:29.243730] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:03:48.868 [2024-11-15 11:22:29.243809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799004 ] 00:03:49.126 [2024-11-15 11:22:29.310212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.126 [2024-11-15 11:22:29.369624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:49.126 [2024-11-15 11:22:29.369673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2799004' to capture a snapshot of events at runtime. 00:03:49.126 [2024-11-15 11:22:29.369687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:49.126 [2024-11-15 11:22:29.369698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:49.126 [2024-11-15 11:22:29.369707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2799004 for offline analysis/debug. 
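Here rpc.sh brings up a single spdk_tgt with "-e bdev", so only the bdev tracepoint group is enabled (it shows up later as tpoint_group_mask 0x8 in the trace_get_info output) and the trace ring is backed by /dev/shm/spdk_tgt_trace.pid2799004 for as long as pid 2799004 lives. A sketch of inspecting that trace while the target is still running, using the command the notice above suggests; the build/bin location of spdk_trace is an assumption (the log only shows that path for spdk_tgt):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_trace -s spdk_tgt -p 2799004      # decode the live tracepoint ring of the target started above
  cp /dev/shm/spdk_tgt_trace.pid2799004 /tmp/        # or keep the shm file for offline analysis after the target exits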
00:03:49.126 [2024-11-15 11:22:29.370321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.384 11:22:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.384 11:22:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:49.384 11:22:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.384 11:22:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.384 11:22:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:49.385 11:22:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:49.385 11:22:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.385 11:22:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.385 11:22:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.385 ************************************ 00:03:49.385 START TEST rpc_integrity 00:03:49.385 ************************************ 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:49.385 { 00:03:49.385 "name": "Malloc0", 00:03:49.385 "aliases": [ 00:03:49.385 "1a06185d-f008-49f4-aa8e-029e308a4ff2" 00:03:49.385 ], 00:03:49.385 "product_name": "Malloc disk", 00:03:49.385 "block_size": 512, 00:03:49.385 "num_blocks": 16384, 00:03:49.385 "uuid": "1a06185d-f008-49f4-aa8e-029e308a4ff2", 00:03:49.385 "assigned_rate_limits": { 00:03:49.385 "rw_ios_per_sec": 0, 00:03:49.385 "rw_mbytes_per_sec": 0, 00:03:49.385 "r_mbytes_per_sec": 0, 00:03:49.385 "w_mbytes_per_sec": 0 00:03:49.385 }, 
00:03:49.385 "claimed": false, 00:03:49.385 "zoned": false, 00:03:49.385 "supported_io_types": { 00:03:49.385 "read": true, 00:03:49.385 "write": true, 00:03:49.385 "unmap": true, 00:03:49.385 "flush": true, 00:03:49.385 "reset": true, 00:03:49.385 "nvme_admin": false, 00:03:49.385 "nvme_io": false, 00:03:49.385 "nvme_io_md": false, 00:03:49.385 "write_zeroes": true, 00:03:49.385 "zcopy": true, 00:03:49.385 "get_zone_info": false, 00:03:49.385 "zone_management": false, 00:03:49.385 "zone_append": false, 00:03:49.385 "compare": false, 00:03:49.385 "compare_and_write": false, 00:03:49.385 "abort": true, 00:03:49.385 "seek_hole": false, 00:03:49.385 "seek_data": false, 00:03:49.385 "copy": true, 00:03:49.385 "nvme_iov_md": false 00:03:49.385 }, 00:03:49.385 "memory_domains": [ 00:03:49.385 { 00:03:49.385 "dma_device_id": "system", 00:03:49.385 "dma_device_type": 1 00:03:49.385 }, 00:03:49.385 { 00:03:49.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.385 "dma_device_type": 2 00:03:49.385 } 00:03:49.385 ], 00:03:49.385 "driver_specific": {} 00:03:49.385 } 00:03:49.385 ]' 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.385 [2024-11-15 11:22:29.771839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:49.385 [2024-11-15 11:22:29.771875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:49.385 [2024-11-15 11:22:29.771896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b9d740 00:03:49.385 [2024-11-15 11:22:29.771907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:49.385 [2024-11-15 11:22:29.773214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:49.385 [2024-11-15 11:22:29.773237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:49.385 Passthru0 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.385 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:49.385 { 00:03:49.385 "name": "Malloc0", 00:03:49.385 "aliases": [ 00:03:49.385 "1a06185d-f008-49f4-aa8e-029e308a4ff2" 00:03:49.385 ], 00:03:49.385 "product_name": "Malloc disk", 00:03:49.385 "block_size": 512, 00:03:49.385 "num_blocks": 16384, 00:03:49.385 "uuid": "1a06185d-f008-49f4-aa8e-029e308a4ff2", 00:03:49.385 "assigned_rate_limits": { 00:03:49.385 "rw_ios_per_sec": 0, 00:03:49.385 "rw_mbytes_per_sec": 0, 00:03:49.385 "r_mbytes_per_sec": 0, 00:03:49.385 "w_mbytes_per_sec": 0 00:03:49.385 }, 00:03:49.385 "claimed": true, 00:03:49.385 "claim_type": "exclusive_write", 00:03:49.385 "zoned": false, 00:03:49.385 "supported_io_types": { 00:03:49.385 "read": true, 00:03:49.385 "write": true, 00:03:49.385 "unmap": true, 00:03:49.385 "flush": 
true, 00:03:49.385 "reset": true, 00:03:49.385 "nvme_admin": false, 00:03:49.385 "nvme_io": false, 00:03:49.385 "nvme_io_md": false, 00:03:49.385 "write_zeroes": true, 00:03:49.385 "zcopy": true, 00:03:49.385 "get_zone_info": false, 00:03:49.385 "zone_management": false, 00:03:49.385 "zone_append": false, 00:03:49.385 "compare": false, 00:03:49.385 "compare_and_write": false, 00:03:49.385 "abort": true, 00:03:49.385 "seek_hole": false, 00:03:49.385 "seek_data": false, 00:03:49.385 "copy": true, 00:03:49.385 "nvme_iov_md": false 00:03:49.385 }, 00:03:49.385 "memory_domains": [ 00:03:49.385 { 00:03:49.385 "dma_device_id": "system", 00:03:49.385 "dma_device_type": 1 00:03:49.385 }, 00:03:49.385 { 00:03:49.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.385 "dma_device_type": 2 00:03:49.385 } 00:03:49.385 ], 00:03:49.385 "driver_specific": {} 00:03:49.385 }, 00:03:49.385 { 00:03:49.385 "name": "Passthru0", 00:03:49.385 "aliases": [ 00:03:49.385 "8157a6a8-e6b7-5d25-a14f-8a4809c9f832" 00:03:49.385 ], 00:03:49.385 "product_name": "passthru", 00:03:49.385 "block_size": 512, 00:03:49.385 "num_blocks": 16384, 00:03:49.385 "uuid": "8157a6a8-e6b7-5d25-a14f-8a4809c9f832", 00:03:49.385 "assigned_rate_limits": { 00:03:49.385 "rw_ios_per_sec": 0, 00:03:49.385 "rw_mbytes_per_sec": 0, 00:03:49.385 "r_mbytes_per_sec": 0, 00:03:49.385 "w_mbytes_per_sec": 0 00:03:49.385 }, 00:03:49.385 "claimed": false, 00:03:49.385 "zoned": false, 00:03:49.385 "supported_io_types": { 00:03:49.385 "read": true, 00:03:49.385 "write": true, 00:03:49.385 "unmap": true, 00:03:49.385 "flush": true, 00:03:49.385 "reset": true, 00:03:49.385 "nvme_admin": false, 00:03:49.385 "nvme_io": false, 00:03:49.385 "nvme_io_md": false, 00:03:49.385 "write_zeroes": true, 00:03:49.385 "zcopy": true, 00:03:49.385 "get_zone_info": false, 00:03:49.385 "zone_management": false, 00:03:49.385 "zone_append": false, 00:03:49.385 "compare": false, 00:03:49.385 "compare_and_write": false, 00:03:49.385 "abort": true, 00:03:49.385 "seek_hole": false, 00:03:49.385 "seek_data": false, 00:03:49.385 "copy": true, 00:03:49.385 "nvme_iov_md": false 00:03:49.385 }, 00:03:49.385 "memory_domains": [ 00:03:49.385 { 00:03:49.385 "dma_device_id": "system", 00:03:49.385 "dma_device_type": 1 00:03:49.385 }, 00:03:49.385 { 00:03:49.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.385 "dma_device_type": 2 00:03:49.385 } 00:03:49.385 ], 00:03:49.385 "driver_specific": { 00:03:49.385 "passthru": { 00:03:49.385 "name": "Passthru0", 00:03:49.385 "base_bdev_name": "Malloc0" 00:03:49.385 } 00:03:49.385 } 00:03:49.385 } 00:03:49.385 ]' 00:03:49.385 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:49.644 11:22:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:49.644 00:03:49.644 real 0m0.214s 00:03:49.644 user 0m0.137s 00:03:49.644 sys 0m0.020s 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.644 11:22:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 ************************************ 00:03:49.644 END TEST rpc_integrity 00:03:49.644 ************************************ 00:03:49.644 11:22:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:49.644 11:22:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.644 11:22:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.644 11:22:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 ************************************ 00:03:49.644 START TEST rpc_plugins 00:03:49.644 ************************************ 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:49.644 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.644 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:49.644 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.644 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.644 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:49.644 { 00:03:49.644 "name": "Malloc1", 00:03:49.644 "aliases": [ 00:03:49.644 "cbdfbf5c-ca97-494b-8273-d17fa637a639" 00:03:49.644 ], 00:03:49.644 "product_name": "Malloc disk", 00:03:49.644 "block_size": 4096, 00:03:49.644 "num_blocks": 256, 00:03:49.644 "uuid": "cbdfbf5c-ca97-494b-8273-d17fa637a639", 00:03:49.644 "assigned_rate_limits": { 00:03:49.644 "rw_ios_per_sec": 0, 00:03:49.644 "rw_mbytes_per_sec": 0, 00:03:49.644 "r_mbytes_per_sec": 0, 00:03:49.644 "w_mbytes_per_sec": 0 00:03:49.644 }, 00:03:49.644 "claimed": false, 00:03:49.644 "zoned": false, 00:03:49.644 "supported_io_types": { 00:03:49.645 "read": true, 00:03:49.645 "write": true, 00:03:49.645 "unmap": true, 00:03:49.645 "flush": true, 00:03:49.645 "reset": true, 00:03:49.645 "nvme_admin": false, 00:03:49.645 "nvme_io": false, 00:03:49.645 "nvme_io_md": false, 00:03:49.645 "write_zeroes": true, 00:03:49.645 "zcopy": true, 00:03:49.645 "get_zone_info": false, 00:03:49.645 "zone_management": false, 00:03:49.645 "zone_append": false, 00:03:49.645 "compare": false, 00:03:49.645 "compare_and_write": false, 00:03:49.645 "abort": true, 00:03:49.645 "seek_hole": false, 00:03:49.645 "seek_data": false, 00:03:49.645 "copy": true, 00:03:49.645 "nvme_iov_md": false 
00:03:49.645 }, 00:03:49.645 "memory_domains": [ 00:03:49.645 { 00:03:49.645 "dma_device_id": "system", 00:03:49.645 "dma_device_type": 1 00:03:49.645 }, 00:03:49.645 { 00:03:49.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:49.645 "dma_device_type": 2 00:03:49.645 } 00:03:49.645 ], 00:03:49.645 "driver_specific": {} 00:03:49.645 } 00:03:49.645 ]' 00:03:49.645 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:49.645 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:49.645 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:49.645 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.645 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.645 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.645 11:22:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:49.645 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.645 11:22:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.645 11:22:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.645 11:22:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:49.645 11:22:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:49.645 11:22:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:49.645 00:03:49.645 real 0m0.114s 00:03:49.645 user 0m0.074s 00:03:49.645 sys 0m0.010s 00:03:49.645 11:22:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.645 11:22:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:49.645 ************************************ 00:03:49.645 END TEST rpc_plugins 00:03:49.645 ************************************ 00:03:49.645 11:22:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:49.645 11:22:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.645 11:22:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.645 11:22:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.903 ************************************ 00:03:49.903 START TEST rpc_trace_cmd_test 00:03:49.903 ************************************ 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:49.903 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2799004", 00:03:49.903 "tpoint_group_mask": "0x8", 00:03:49.903 "iscsi_conn": { 00:03:49.903 "mask": "0x2", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "scsi": { 00:03:49.903 "mask": "0x4", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "bdev": { 00:03:49.903 "mask": "0x8", 00:03:49.903 "tpoint_mask": "0xffffffffffffffff" 00:03:49.903 }, 00:03:49.903 "nvmf_rdma": { 00:03:49.903 "mask": "0x10", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "nvmf_tcp": { 00:03:49.903 "mask": "0x20", 00:03:49.903 
"tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "ftl": { 00:03:49.903 "mask": "0x40", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "blobfs": { 00:03:49.903 "mask": "0x80", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "dsa": { 00:03:49.903 "mask": "0x200", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "thread": { 00:03:49.903 "mask": "0x400", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "nvme_pcie": { 00:03:49.903 "mask": "0x800", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "iaa": { 00:03:49.903 "mask": "0x1000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "nvme_tcp": { 00:03:49.903 "mask": "0x2000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "bdev_nvme": { 00:03:49.903 "mask": "0x4000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "sock": { 00:03:49.903 "mask": "0x8000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "blob": { 00:03:49.903 "mask": "0x10000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "bdev_raid": { 00:03:49.903 "mask": "0x20000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 }, 00:03:49.903 "scheduler": { 00:03:49.903 "mask": "0x40000", 00:03:49.903 "tpoint_mask": "0x0" 00:03:49.903 } 00:03:49.903 }' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:49.903 00:03:49.903 real 0m0.193s 00:03:49.903 user 0m0.164s 00:03:49.903 sys 0m0.019s 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.903 11:22:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:49.903 ************************************ 00:03:49.903 END TEST rpc_trace_cmd_test 00:03:49.903 ************************************ 00:03:49.903 11:22:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:49.903 11:22:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:49.903 11:22:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:49.903 11:22:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.903 11:22:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.903 11:22:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.162 ************************************ 00:03:50.162 START TEST rpc_daemon_integrity 00:03:50.162 ************************************ 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.162 11:22:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.162 { 00:03:50.162 "name": "Malloc2", 00:03:50.162 "aliases": [ 00:03:50.162 "c9b773cd-6247-4917-805c-6413e75a27fb" 00:03:50.162 ], 00:03:50.162 "product_name": "Malloc disk", 00:03:50.162 "block_size": 512, 00:03:50.162 "num_blocks": 16384, 00:03:50.162 "uuid": "c9b773cd-6247-4917-805c-6413e75a27fb", 00:03:50.162 "assigned_rate_limits": { 00:03:50.162 "rw_ios_per_sec": 0, 00:03:50.162 "rw_mbytes_per_sec": 0, 00:03:50.162 "r_mbytes_per_sec": 0, 00:03:50.162 "w_mbytes_per_sec": 0 00:03:50.162 }, 00:03:50.162 "claimed": false, 00:03:50.162 "zoned": false, 00:03:50.162 "supported_io_types": { 00:03:50.162 "read": true, 00:03:50.162 "write": true, 00:03:50.162 "unmap": true, 00:03:50.162 "flush": true, 00:03:50.162 "reset": true, 00:03:50.162 "nvme_admin": false, 00:03:50.162 "nvme_io": false, 00:03:50.162 "nvme_io_md": false, 00:03:50.162 "write_zeroes": true, 00:03:50.162 "zcopy": true, 00:03:50.162 "get_zone_info": false, 00:03:50.162 "zone_management": false, 00:03:50.162 "zone_append": false, 00:03:50.162 "compare": false, 00:03:50.162 "compare_and_write": false, 00:03:50.162 "abort": true, 00:03:50.162 "seek_hole": false, 00:03:50.162 "seek_data": false, 00:03:50.162 "copy": true, 00:03:50.162 "nvme_iov_md": false 00:03:50.162 }, 00:03:50.162 "memory_domains": [ 00:03:50.162 { 00:03:50.162 "dma_device_id": "system", 00:03:50.162 "dma_device_type": 1 00:03:50.162 }, 00:03:50.162 { 00:03:50.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.162 "dma_device_type": 2 00:03:50.162 } 00:03:50.162 ], 00:03:50.162 "driver_specific": {} 00:03:50.162 } 00:03:50.162 ]' 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.162 [2024-11-15 11:22:30.434272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:50.162 
[2024-11-15 11:22:30.434334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.162 [2024-11-15 11:22:30.434370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b9dd20 00:03:50.162 [2024-11-15 11:22:30.434384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.162 [2024-11-15 11:22:30.435678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.162 [2024-11-15 11:22:30.435706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.162 Passthru0 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.162 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.162 { 00:03:50.162 "name": "Malloc2", 00:03:50.162 "aliases": [ 00:03:50.162 "c9b773cd-6247-4917-805c-6413e75a27fb" 00:03:50.162 ], 00:03:50.162 "product_name": "Malloc disk", 00:03:50.162 "block_size": 512, 00:03:50.162 "num_blocks": 16384, 00:03:50.162 "uuid": "c9b773cd-6247-4917-805c-6413e75a27fb", 00:03:50.162 "assigned_rate_limits": { 00:03:50.162 "rw_ios_per_sec": 0, 00:03:50.162 "rw_mbytes_per_sec": 0, 00:03:50.162 "r_mbytes_per_sec": 0, 00:03:50.162 "w_mbytes_per_sec": 0 00:03:50.162 }, 00:03:50.162 "claimed": true, 00:03:50.162 "claim_type": "exclusive_write", 00:03:50.162 "zoned": false, 00:03:50.162 "supported_io_types": { 00:03:50.162 "read": true, 00:03:50.162 "write": true, 00:03:50.162 "unmap": true, 00:03:50.162 "flush": true, 00:03:50.162 "reset": true, 00:03:50.162 "nvme_admin": false, 00:03:50.162 "nvme_io": false, 00:03:50.162 "nvme_io_md": false, 00:03:50.162 "write_zeroes": true, 00:03:50.162 "zcopy": true, 00:03:50.162 "get_zone_info": false, 00:03:50.162 "zone_management": false, 00:03:50.162 "zone_append": false, 00:03:50.162 "compare": false, 00:03:50.162 "compare_and_write": false, 00:03:50.162 "abort": true, 00:03:50.162 "seek_hole": false, 00:03:50.162 "seek_data": false, 00:03:50.162 "copy": true, 00:03:50.162 "nvme_iov_md": false 00:03:50.162 }, 00:03:50.162 "memory_domains": [ 00:03:50.162 { 00:03:50.162 "dma_device_id": "system", 00:03:50.162 "dma_device_type": 1 00:03:50.162 }, 00:03:50.162 { 00:03:50.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.162 "dma_device_type": 2 00:03:50.162 } 00:03:50.162 ], 00:03:50.162 "driver_specific": {} 00:03:50.162 }, 00:03:50.162 { 00:03:50.162 "name": "Passthru0", 00:03:50.162 "aliases": [ 00:03:50.162 "8c625614-a0ad-58b2-9a70-f63f3f3959d1" 00:03:50.162 ], 00:03:50.162 "product_name": "passthru", 00:03:50.162 "block_size": 512, 00:03:50.162 "num_blocks": 16384, 00:03:50.162 "uuid": "8c625614-a0ad-58b2-9a70-f63f3f3959d1", 00:03:50.162 "assigned_rate_limits": { 00:03:50.162 "rw_ios_per_sec": 0, 00:03:50.163 "rw_mbytes_per_sec": 0, 00:03:50.163 "r_mbytes_per_sec": 0, 00:03:50.163 "w_mbytes_per_sec": 0 00:03:50.163 }, 00:03:50.163 "claimed": false, 00:03:50.163 "zoned": false, 00:03:50.163 "supported_io_types": { 00:03:50.163 "read": true, 00:03:50.163 "write": true, 00:03:50.163 "unmap": true, 00:03:50.163 "flush": true, 00:03:50.163 "reset": true, 
00:03:50.163 "nvme_admin": false, 00:03:50.163 "nvme_io": false, 00:03:50.163 "nvme_io_md": false, 00:03:50.163 "write_zeroes": true, 00:03:50.163 "zcopy": true, 00:03:50.163 "get_zone_info": false, 00:03:50.163 "zone_management": false, 00:03:50.163 "zone_append": false, 00:03:50.163 "compare": false, 00:03:50.163 "compare_and_write": false, 00:03:50.163 "abort": true, 00:03:50.163 "seek_hole": false, 00:03:50.163 "seek_data": false, 00:03:50.163 "copy": true, 00:03:50.163 "nvme_iov_md": false 00:03:50.163 }, 00:03:50.163 "memory_domains": [ 00:03:50.163 { 00:03:50.163 "dma_device_id": "system", 00:03:50.163 "dma_device_type": 1 00:03:50.163 }, 00:03:50.163 { 00:03:50.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.163 "dma_device_type": 2 00:03:50.163 } 00:03:50.163 ], 00:03:50.163 "driver_specific": { 00:03:50.163 "passthru": { 00:03:50.163 "name": "Passthru0", 00:03:50.163 "base_bdev_name": "Malloc2" 00:03:50.163 } 00:03:50.163 } 00:03:50.163 } 00:03:50.163 ]' 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.163 00:03:50.163 real 0m0.219s 00:03:50.163 user 0m0.141s 00:03:50.163 sys 0m0.020s 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.163 11:22:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.163 ************************************ 00:03:50.163 END TEST rpc_daemon_integrity 00:03:50.163 ************************************ 00:03:50.163 11:22:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:50.163 11:22:30 rpc -- rpc/rpc.sh@84 -- # killprocess 2799004 00:03:50.163 11:22:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 2799004 ']' 00:03:50.163 11:22:30 rpc -- common/autotest_common.sh@958 -- # kill -0 2799004 00:03:50.163 11:22:30 rpc -- common/autotest_common.sh@959 -- # uname 00:03:50.163 11:22:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.163 11:22:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2799004 
00:03:50.421 11:22:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.421 11:22:30 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.421 11:22:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2799004' 00:03:50.421 killing process with pid 2799004 00:03:50.421 11:22:30 rpc -- common/autotest_common.sh@973 -- # kill 2799004 00:03:50.421 11:22:30 rpc -- common/autotest_common.sh@978 -- # wait 2799004 00:03:50.679 00:03:50.679 real 0m1.988s 00:03:50.679 user 0m2.448s 00:03:50.679 sys 0m0.629s 00:03:50.679 11:22:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.679 11:22:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.679 ************************************ 00:03:50.679 END TEST rpc 00:03:50.679 ************************************ 00:03:50.679 11:22:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.679 11:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.679 11:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.679 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.679 ************************************ 00:03:50.679 START TEST skip_rpc 00:03:50.679 ************************************ 00:03:50.679 11:22:31 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:50.937 * Looking for test storage... 00:03:50.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.937 11:22:31 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:50.937 11:22:31 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:50.937 11:22:31 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:50.937 11:22:31 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.937 11:22:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.938 11:22:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.938 11:22:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.938 11:22:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.938 11:22:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:50.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.938 --rc genhtml_branch_coverage=1 00:03:50.938 --rc genhtml_function_coverage=1 00:03:50.938 --rc genhtml_legend=1 00:03:50.938 --rc geninfo_all_blocks=1 00:03:50.938 --rc geninfo_unexecuted_blocks=1 00:03:50.938 00:03:50.938 ' 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:50.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.938 --rc genhtml_branch_coverage=1 00:03:50.938 --rc genhtml_function_coverage=1 00:03:50.938 --rc genhtml_legend=1 00:03:50.938 --rc geninfo_all_blocks=1 00:03:50.938 --rc geninfo_unexecuted_blocks=1 00:03:50.938 00:03:50.938 ' 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:50.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.938 --rc genhtml_branch_coverage=1 00:03:50.938 --rc genhtml_function_coverage=1 00:03:50.938 --rc genhtml_legend=1 00:03:50.938 --rc geninfo_all_blocks=1 00:03:50.938 --rc geninfo_unexecuted_blocks=1 00:03:50.938 00:03:50.938 ' 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:50.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.938 --rc genhtml_branch_coverage=1 00:03:50.938 --rc genhtml_function_coverage=1 00:03:50.938 --rc genhtml_legend=1 00:03:50.938 --rc geninfo_all_blocks=1 00:03:50.938 --rc geninfo_unexecuted_blocks=1 00:03:50.938 00:03:50.938 ' 00:03:50.938 11:22:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.938 11:22:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:50.938 11:22:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.938 11:22:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.938 ************************************ 00:03:50.938 START TEST skip_rpc 00:03:50.938 ************************************ 00:03:50.938 11:22:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:50.938 
11:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2799340 00:03:50.938 11:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:50.938 11:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.938 11:22:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:50.938 [2024-11-15 11:22:31.312405] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:03:50.938 [2024-11-15 11:22:31.312487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799340 ] 00:03:51.196 [2024-11-15 11:22:31.383078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.196 [2024-11-15 11:22:31.442027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2799340 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2799340 ']' 00:03:56.461 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2799340 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2799340 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2799340' 00:03:56.462 killing process with pid 2799340 00:03:56.462 11:22:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2799340 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2799340 00:03:56.462 00:03:56.462 real 0m5.470s 00:03:56.462 user 0m5.179s 00:03:56.462 sys 0m0.312s 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.462 11:22:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.462 ************************************ 00:03:56.462 END TEST skip_rpc 00:03:56.462 ************************************ 00:03:56.462 11:22:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:56.462 11:22:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.462 11:22:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.462 11:22:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.462 ************************************ 00:03:56.462 START TEST skip_rpc_with_json 00:03:56.462 ************************************ 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2800019 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2800019 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2800019 ']' 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.462 11:22:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.462 [2024-11-15 11:22:36.830931] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
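test_skip_rpc, which finished just above with a real time of about 5.5 s, is the negative counterpart of the rpc tests: spdk_tgt is started with --no-rpc-server, the script sleeps 5 seconds instead of waiting for a socket, and the assertion is that rpc_cmd spdk_get_version fails (es=1) because nothing is listening. A sketch of the same check done by hand; the spdk_tgt path is the one in the log, the explicit kill replaces the script's killprocess helper, and the same privileges and hugepage setup as the rest of the run are assumed:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  ./scripts/rpc.py spdk_get_version && echo "unexpected: RPC answered" || echo "RPC refused, as expected"
  kill %1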
00:03:56.462 [2024-11-15 11:22:36.831033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800019 ] 00:03:56.793 [2024-11-15 11:22:36.902423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.793 [2024-11-15 11:22:36.962444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.073 [2024-11-15 11:22:37.222015] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:57.073 request: 00:03:57.073 { 00:03:57.073 "trtype": "tcp", 00:03:57.073 "method": "nvmf_get_transports", 00:03:57.073 "req_id": 1 00:03:57.073 } 00:03:57.073 Got JSON-RPC error response 00:03:57.073 response: 00:03:57.073 { 00:03:57.073 "code": -19, 00:03:57.073 "message": "No such device" 00:03:57.073 } 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.073 [2024-11-15 11:22:37.230124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.073 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.073 { 00:03:57.073 "subsystems": [ 00:03:57.073 { 00:03:57.073 "subsystem": "fsdev", 00:03:57.073 "config": [ 00:03:57.073 { 00:03:57.073 "method": "fsdev_set_opts", 00:03:57.073 "params": { 00:03:57.073 "fsdev_io_pool_size": 65535, 00:03:57.073 "fsdev_io_cache_size": 256 00:03:57.073 } 00:03:57.073 } 00:03:57.073 ] 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "vfio_user_target", 00:03:57.073 "config": null 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "keyring", 00:03:57.073 "config": [] 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "iobuf", 00:03:57.073 "config": [ 00:03:57.073 { 00:03:57.073 "method": "iobuf_set_options", 00:03:57.073 "params": { 00:03:57.073 "small_pool_count": 8192, 00:03:57.073 "large_pool_count": 1024, 00:03:57.073 "small_bufsize": 8192, 00:03:57.073 "large_bufsize": 135168, 00:03:57.073 "enable_numa": false 00:03:57.073 } 00:03:57.073 } 
00:03:57.073 ] 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "sock", 00:03:57.073 "config": [ 00:03:57.073 { 00:03:57.073 "method": "sock_set_default_impl", 00:03:57.073 "params": { 00:03:57.073 "impl_name": "posix" 00:03:57.073 } 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "method": "sock_impl_set_options", 00:03:57.073 "params": { 00:03:57.073 "impl_name": "ssl", 00:03:57.073 "recv_buf_size": 4096, 00:03:57.073 "send_buf_size": 4096, 00:03:57.073 "enable_recv_pipe": true, 00:03:57.073 "enable_quickack": false, 00:03:57.073 "enable_placement_id": 0, 00:03:57.073 "enable_zerocopy_send_server": true, 00:03:57.073 "enable_zerocopy_send_client": false, 00:03:57.073 "zerocopy_threshold": 0, 00:03:57.073 "tls_version": 0, 00:03:57.073 "enable_ktls": false 00:03:57.073 } 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "method": "sock_impl_set_options", 00:03:57.073 "params": { 00:03:57.073 "impl_name": "posix", 00:03:57.073 "recv_buf_size": 2097152, 00:03:57.073 "send_buf_size": 2097152, 00:03:57.073 "enable_recv_pipe": true, 00:03:57.073 "enable_quickack": false, 00:03:57.073 "enable_placement_id": 0, 00:03:57.073 "enable_zerocopy_send_server": true, 00:03:57.073 "enable_zerocopy_send_client": false, 00:03:57.073 "zerocopy_threshold": 0, 00:03:57.073 "tls_version": 0, 00:03:57.073 "enable_ktls": false 00:03:57.073 } 00:03:57.073 } 00:03:57.073 ] 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "vmd", 00:03:57.073 "config": [] 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "accel", 00:03:57.073 "config": [ 00:03:57.073 { 00:03:57.073 "method": "accel_set_options", 00:03:57.073 "params": { 00:03:57.073 "small_cache_size": 128, 00:03:57.073 "large_cache_size": 16, 00:03:57.073 "task_count": 2048, 00:03:57.073 "sequence_count": 2048, 00:03:57.073 "buf_count": 2048 00:03:57.073 } 00:03:57.073 } 00:03:57.073 ] 00:03:57.073 }, 00:03:57.073 { 00:03:57.073 "subsystem": "bdev", 00:03:57.073 "config": [ 00:03:57.073 { 00:03:57.074 "method": "bdev_set_options", 00:03:57.074 "params": { 00:03:57.074 "bdev_io_pool_size": 65535, 00:03:57.074 "bdev_io_cache_size": 256, 00:03:57.074 "bdev_auto_examine": true, 00:03:57.074 "iobuf_small_cache_size": 128, 00:03:57.074 "iobuf_large_cache_size": 16 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "bdev_raid_set_options", 00:03:57.074 "params": { 00:03:57.074 "process_window_size_kb": 1024, 00:03:57.074 "process_max_bandwidth_mb_sec": 0 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "bdev_iscsi_set_options", 00:03:57.074 "params": { 00:03:57.074 "timeout_sec": 30 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "bdev_nvme_set_options", 00:03:57.074 "params": { 00:03:57.074 "action_on_timeout": "none", 00:03:57.074 "timeout_us": 0, 00:03:57.074 "timeout_admin_us": 0, 00:03:57.074 "keep_alive_timeout_ms": 10000, 00:03:57.074 "arbitration_burst": 0, 00:03:57.074 "low_priority_weight": 0, 00:03:57.074 "medium_priority_weight": 0, 00:03:57.074 "high_priority_weight": 0, 00:03:57.074 "nvme_adminq_poll_period_us": 10000, 00:03:57.074 "nvme_ioq_poll_period_us": 0, 00:03:57.074 "io_queue_requests": 0, 00:03:57.074 "delay_cmd_submit": true, 00:03:57.074 "transport_retry_count": 4, 00:03:57.074 "bdev_retry_count": 3, 00:03:57.074 "transport_ack_timeout": 0, 00:03:57.074 "ctrlr_loss_timeout_sec": 0, 00:03:57.074 "reconnect_delay_sec": 0, 00:03:57.074 "fast_io_fail_timeout_sec": 0, 00:03:57.074 "disable_auto_failback": false, 00:03:57.074 "generate_uuids": false, 00:03:57.074 "transport_tos": 
0, 00:03:57.074 "nvme_error_stat": false, 00:03:57.074 "rdma_srq_size": 0, 00:03:57.074 "io_path_stat": false, 00:03:57.074 "allow_accel_sequence": false, 00:03:57.074 "rdma_max_cq_size": 0, 00:03:57.074 "rdma_cm_event_timeout_ms": 0, 00:03:57.074 "dhchap_digests": [ 00:03:57.074 "sha256", 00:03:57.074 "sha384", 00:03:57.074 "sha512" 00:03:57.074 ], 00:03:57.074 "dhchap_dhgroups": [ 00:03:57.074 "null", 00:03:57.074 "ffdhe2048", 00:03:57.074 "ffdhe3072", 00:03:57.074 "ffdhe4096", 00:03:57.074 "ffdhe6144", 00:03:57.074 "ffdhe8192" 00:03:57.074 ] 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "bdev_nvme_set_hotplug", 00:03:57.074 "params": { 00:03:57.074 "period_us": 100000, 00:03:57.074 "enable": false 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "bdev_wait_for_examine" 00:03:57.074 } 00:03:57.074 ] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "scsi", 00:03:57.074 "config": null 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "scheduler", 00:03:57.074 "config": [ 00:03:57.074 { 00:03:57.074 "method": "framework_set_scheduler", 00:03:57.074 "params": { 00:03:57.074 "name": "static" 00:03:57.074 } 00:03:57.074 } 00:03:57.074 ] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "vhost_scsi", 00:03:57.074 "config": [] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "vhost_blk", 00:03:57.074 "config": [] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "ublk", 00:03:57.074 "config": [] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "nbd", 00:03:57.074 "config": [] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "nvmf", 00:03:57.074 "config": [ 00:03:57.074 { 00:03:57.074 "method": "nvmf_set_config", 00:03:57.074 "params": { 00:03:57.074 "discovery_filter": "match_any", 00:03:57.074 "admin_cmd_passthru": { 00:03:57.074 "identify_ctrlr": false 00:03:57.074 }, 00:03:57.074 "dhchap_digests": [ 00:03:57.074 "sha256", 00:03:57.074 "sha384", 00:03:57.074 "sha512" 00:03:57.074 ], 00:03:57.074 "dhchap_dhgroups": [ 00:03:57.074 "null", 00:03:57.074 "ffdhe2048", 00:03:57.074 "ffdhe3072", 00:03:57.074 "ffdhe4096", 00:03:57.074 "ffdhe6144", 00:03:57.074 "ffdhe8192" 00:03:57.074 ] 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "nvmf_set_max_subsystems", 00:03:57.074 "params": { 00:03:57.074 "max_subsystems": 1024 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "nvmf_set_crdt", 00:03:57.074 "params": { 00:03:57.074 "crdt1": 0, 00:03:57.074 "crdt2": 0, 00:03:57.074 "crdt3": 0 00:03:57.074 } 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "method": "nvmf_create_transport", 00:03:57.074 "params": { 00:03:57.074 "trtype": "TCP", 00:03:57.074 "max_queue_depth": 128, 00:03:57.074 "max_io_qpairs_per_ctrlr": 127, 00:03:57.074 "in_capsule_data_size": 4096, 00:03:57.074 "max_io_size": 131072, 00:03:57.074 "io_unit_size": 131072, 00:03:57.074 "max_aq_depth": 128, 00:03:57.074 "num_shared_buffers": 511, 00:03:57.074 "buf_cache_size": 4294967295, 00:03:57.074 "dif_insert_or_strip": false, 00:03:57.074 "zcopy": false, 00:03:57.074 "c2h_success": true, 00:03:57.074 "sock_priority": 0, 00:03:57.074 "abort_timeout_sec": 1, 00:03:57.074 "ack_timeout": 0, 00:03:57.074 "data_wr_pool_size": 0 00:03:57.074 } 00:03:57.074 } 00:03:57.074 ] 00:03:57.074 }, 00:03:57.074 { 00:03:57.074 "subsystem": "iscsi", 00:03:57.074 "config": [ 00:03:57.074 { 00:03:57.074 "method": "iscsi_set_options", 00:03:57.074 "params": { 00:03:57.074 "node_base": "iqn.2016-06.io.spdk", 00:03:57.074 "max_sessions": 
128, 00:03:57.074 "max_connections_per_session": 2, 00:03:57.074 "max_queue_depth": 64, 00:03:57.074 "default_time2wait": 2, 00:03:57.074 "default_time2retain": 20, 00:03:57.074 "first_burst_length": 8192, 00:03:57.074 "immediate_data": true, 00:03:57.074 "allow_duplicated_isid": false, 00:03:57.074 "error_recovery_level": 0, 00:03:57.074 "nop_timeout": 60, 00:03:57.074 "nop_in_interval": 30, 00:03:57.074 "disable_chap": false, 00:03:57.074 "require_chap": false, 00:03:57.074 "mutual_chap": false, 00:03:57.074 "chap_group": 0, 00:03:57.074 "max_large_datain_per_connection": 64, 00:03:57.074 "max_r2t_per_connection": 4, 00:03:57.074 "pdu_pool_size": 36864, 00:03:57.074 "immediate_data_pool_size": 16384, 00:03:57.074 "data_out_pool_size": 2048 00:03:57.074 } 00:03:57.074 } 00:03:57.074 ] 00:03:57.074 } 00:03:57.074 ] 00:03:57.074 } 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2800019 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2800019 ']' 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2800019 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800019 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800019' 00:03:57.074 killing process with pid 2800019 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2800019 00:03:57.074 11:22:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2800019 00:03:57.639 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2800162 00:03:57.640 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.640 11:22:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2800162 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2800162 ']' 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2800162 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800162 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2800162' 00:04:02.901 killing process with pid 2800162 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2800162 00:04:02.901 11:22:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2800162 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:03.160 00:04:03.160 real 0m6.554s 00:04:03.160 user 0m6.213s 00:04:03.160 sys 0m0.667s 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:03.160 ************************************ 00:04:03.160 END TEST skip_rpc_with_json 00:04:03.160 ************************************ 00:04:03.160 11:22:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:03.160 11:22:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.160 11:22:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.160 11:22:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.160 ************************************ 00:04:03.160 START TEST skip_rpc_with_delay 00:04:03.160 ************************************ 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:03.160 
[2024-11-15 11:22:43.448131] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.160 00:04:03.160 real 0m0.075s 00:04:03.160 user 0m0.048s 00:04:03.160 sys 0m0.026s 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.160 11:22:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:03.160 ************************************ 00:04:03.160 END TEST skip_rpc_with_delay 00:04:03.160 ************************************ 00:04:03.160 11:22:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:03.160 11:22:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:03.160 11:22:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:03.160 11:22:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.160 11:22:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.160 11:22:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.160 ************************************ 00:04:03.160 START TEST exit_on_failed_rpc_init 00:04:03.160 ************************************ 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2800886 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2800886 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2800886 ']' 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:03.160 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.160 [2024-11-15 11:22:43.573150] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:03.160 [2024-11-15 11:22:43.573230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800886 ] 00:04:03.419 [2024-11-15 11:22:43.641507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.419 [2024-11-15 11:22:43.700360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.677 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.678 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.678 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.678 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:03.678 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.678 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:03.678 11:22:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:03.678 [2024-11-15 11:22:44.022378] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:03.678 [2024-11-15 11:22:44.022451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801008 ] 00:04:03.678 [2024-11-15 11:22:44.087391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.935 [2024-11-15 11:22:44.147702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.935 [2024-11-15 11:22:44.147814] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:03.935 [2024-11-15 11:22:44.147833] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:03.935 [2024-11-15 11:22:44.147845] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:03.935 11:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2800886 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2800886 ']' 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2800886 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800886 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800886' 00:04:03.936 killing process with pid 2800886 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2800886 00:04:03.936 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2800886 00:04:04.500 00:04:04.500 real 0m1.162s 00:04:04.500 user 0m1.281s 00:04:04.500 sys 0m0.438s 00:04:04.500 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.500 11:22:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.500 ************************************ 00:04:04.500 END TEST exit_on_failed_rpc_init 00:04:04.500 ************************************ 00:04:04.500 11:22:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.500 00:04:04.500 real 0m13.621s 00:04:04.500 user 0m12.900s 00:04:04.500 sys 0m1.645s 00:04:04.500 11:22:44 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.500 11:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.500 ************************************ 00:04:04.500 END TEST skip_rpc 00:04:04.500 ************************************ 00:04:04.500 11:22:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.500 11:22:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.500 11:22:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.500 11:22:44 -- 
common/autotest_common.sh@10 -- # set +x 00:04:04.500 ************************************ 00:04:04.500 START TEST rpc_client 00:04:04.500 ************************************ 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:04.500 * Looking for test storage... 00:04:04.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.500 11:22:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.500 --rc genhtml_branch_coverage=1 00:04:04.500 --rc genhtml_function_coverage=1 00:04:04.500 --rc genhtml_legend=1 00:04:04.500 --rc geninfo_all_blocks=1 00:04:04.500 --rc geninfo_unexecuted_blocks=1 00:04:04.500 00:04:04.500 ' 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.500 --rc genhtml_branch_coverage=1 00:04:04.500 --rc genhtml_function_coverage=1 00:04:04.500 --rc genhtml_legend=1 00:04:04.500 --rc geninfo_all_blocks=1 00:04:04.500 --rc geninfo_unexecuted_blocks=1 00:04:04.500 00:04:04.500 ' 00:04:04.500 11:22:44 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.500 --rc genhtml_branch_coverage=1 00:04:04.500 --rc genhtml_function_coverage=1 00:04:04.500 --rc genhtml_legend=1 00:04:04.500 --rc geninfo_all_blocks=1 00:04:04.500 --rc geninfo_unexecuted_blocks=1 00:04:04.500 00:04:04.500 ' 00:04:04.501 11:22:44 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.501 --rc genhtml_branch_coverage=1 00:04:04.501 --rc genhtml_function_coverage=1 00:04:04.501 --rc genhtml_legend=1 00:04:04.501 --rc geninfo_all_blocks=1 00:04:04.501 --rc geninfo_unexecuted_blocks=1 00:04:04.501 00:04:04.501 ' 00:04:04.501 11:22:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:04.501 OK 00:04:04.501 11:22:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:04.501 00:04:04.501 real 0m0.166s 00:04:04.501 user 0m0.104s 00:04:04.501 sys 0m0.070s 00:04:04.501 11:22:44 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.759 11:22:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:04.759 ************************************ 00:04:04.759 END TEST rpc_client 00:04:04.759 ************************************ 00:04:04.759 11:22:44 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:04.759 11:22:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.759 11:22:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.759 11:22:44 -- common/autotest_common.sh@10 -- # set +x 00:04:04.759 ************************************ 00:04:04.759 START TEST json_config 00:04:04.759 ************************************ 00:04:04.759 11:22:44 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.759 11:22:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.759 11:22:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.759 11:22:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.759 11:22:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.759 11:22:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.759 11:22:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:04.759 11:22:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.759 11:22:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.759 11:22:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.759 11:22:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.759 11:22:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.759 11:22:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.759 11:22:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.759 11:22:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.759 11:22:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.759 --rc genhtml_branch_coverage=1 00:04:04.759 --rc genhtml_function_coverage=1 00:04:04.759 --rc genhtml_legend=1 00:04:04.759 --rc geninfo_all_blocks=1 00:04:04.759 --rc geninfo_unexecuted_blocks=1 00:04:04.759 00:04:04.759 ' 00:04:04.759 11:22:45 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.759 --rc genhtml_branch_coverage=1 00:04:04.760 --rc genhtml_function_coverage=1 00:04:04.760 --rc genhtml_legend=1 00:04:04.760 --rc geninfo_all_blocks=1 00:04:04.760 --rc geninfo_unexecuted_blocks=1 00:04:04.760 00:04:04.760 ' 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.760 --rc genhtml_branch_coverage=1 00:04:04.760 --rc genhtml_function_coverage=1 00:04:04.760 --rc genhtml_legend=1 00:04:04.760 --rc geninfo_all_blocks=1 00:04:04.760 --rc geninfo_unexecuted_blocks=1 00:04:04.760 00:04:04.760 ' 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.760 --rc genhtml_branch_coverage=1 00:04:04.760 --rc genhtml_function_coverage=1 00:04:04.760 --rc genhtml_legend=1 00:04:04.760 --rc geninfo_all_blocks=1 00:04:04.760 --rc geninfo_unexecuted_blocks=1 00:04:04.760 00:04:04.760 ' 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:04.760 11:22:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:04.760 11:22:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:04.760 11:22:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.760 11:22:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.760 11:22:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.760 11:22:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.760 11:22:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.760 11:22:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.760 11:22:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:04.760 11:22:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:04.760 11:22:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:04.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:04.760 11:22:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:04.760 INFO: JSON configuration test init 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.760 11:22:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:04.760 11:22:45 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:04.760 11:22:45 json_config -- json_config/common.sh@10 -- # shift 00:04:04.760 11:22:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.760 11:22:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.760 11:22:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.760 11:22:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.760 11:22:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.760 11:22:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2801268 00:04:04.760 11:22:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:04.760 11:22:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.760 Waiting for target to run... 00:04:04.760 11:22:45 json_config -- json_config/common.sh@25 -- # waitforlisten 2801268 /var/tmp/spdk_tgt.sock 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 2801268 ']' 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.760 11:22:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.760 [2024-11-15 11:22:45.167157] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:04.760 [2024-11-15 11:22:45.167232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801268 ] 00:04:05.326 [2024-11-15 11:22:45.509771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.326 [2024-11-15 11:22:45.551692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.891 11:22:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.891 11:22:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:05.891 11:22:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.891 00:04:05.891 11:22:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:05.891 11:22:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:05.891 11:22:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.891 11:22:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.891 11:22:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:05.891 11:22:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:05.891 11:22:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.891 11:22:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.891 11:22:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:05.891 11:22:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:05.891 11:22:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:09.177 11:22:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:09.177 11:22:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:09.177 11:22:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.177 11:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.177 11:22:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:09.177 11:22:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:09.178 11:22:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:09.178 11:22:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:09.178 11:22:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:09.178 11:22:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:09.178 11:22:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:09.178 11:22:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:09.436 11:22:49 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@54 -- # sort 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:09.436 11:22:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.436 11:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:09.436 11:22:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.436 11:22:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:09.436 11:22:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.436 11:22:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:09.694 MallocForNvmf0 00:04:09.694 11:22:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.694 11:22:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:09.952 MallocForNvmf1 00:04:09.952 11:22:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:09.952 11:22:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.210 [2024-11-15 11:22:50.454263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.210 11:22:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.210 11:22:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.467 11:22:50 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.467 11:22:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.724 11:22:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.724 11:22:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:10.981 11:22:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:10.981 11:22:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.237 [2024-11-15 11:22:51.509581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.237 11:22:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:11.237 11:22:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.237 11:22:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.237 11:22:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:11.237 11:22:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.237 11:22:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.237 11:22:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:11.237 11:22:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.237 11:22:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.494 MallocBdevForConfigChangeCheck 00:04:11.494 11:22:51 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:11.494 11:22:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.494 11:22:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.494 11:22:51 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:11.494 11:22:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.060 11:22:52 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:12.060 INFO: shutting down applications... 
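The target configuration torn down below was built entirely over JSON-RPC. A minimal sketch that replays the same sequence by hand against a running spdk_tgt, using the socket path, bdev names and subsystem NQN visible in the trace above (adjust the workspace prefix for your checkout; this is an illustration, not part of the test script):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  # malloc bdevs used as namespaces
  $RPC -s $SOCK bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC -s $SOCK bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, subsystem, namespaces and a listener on 127.0.0.1:4420
  $RPC -s $SOCK nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC -s $SOCK nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC -s $SOCK nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # snapshot the resulting configuration for a later relaunch with --json
  $RPC -s $SOCK save_config > spdk_tgt_config.json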
00:04:12.060 11:22:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:12.060 11:22:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:12.060 11:22:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:12.060 11:22:52 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:13.432 Calling clear_iscsi_subsystem 00:04:13.432 Calling clear_nvmf_subsystem 00:04:13.432 Calling clear_nbd_subsystem 00:04:13.432 Calling clear_ublk_subsystem 00:04:13.432 Calling clear_vhost_blk_subsystem 00:04:13.432 Calling clear_vhost_scsi_subsystem 00:04:13.432 Calling clear_bdev_subsystem 00:04:13.432 11:22:53 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:13.432 11:22:53 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:13.432 11:22:53 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:13.432 11:22:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.432 11:22:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:13.432 11:22:53 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:13.997 11:22:54 json_config -- json_config/json_config.sh@352 -- # break 00:04:13.997 11:22:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:13.997 11:22:54 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:13.997 11:22:54 json_config -- json_config/common.sh@31 -- # local app=target 00:04:13.997 11:22:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:13.997 11:22:54 json_config -- json_config/common.sh@35 -- # [[ -n 2801268 ]] 00:04:13.997 11:22:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2801268 00:04:13.997 11:22:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:13.997 11:22:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:13.997 11:22:54 json_config -- json_config/common.sh@41 -- # kill -0 2801268 00:04:13.997 11:22:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.562 11:22:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.562 11:22:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.562 11:22:54 json_config -- json_config/common.sh@41 -- # kill -0 2801268 00:04:14.562 11:22:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.562 11:22:54 json_config -- json_config/common.sh@43 -- # break 00:04:14.562 11:22:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.562 11:22:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:14.562 SPDK target shutdown done 00:04:14.562 11:22:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:14.562 INFO: relaunching applications... 
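The teardown above first empties the configuration with clear_config.py and confirms the result through config_filter.py -method check_empty, then stops the target with json_config_test_shutdown_app: a SIGINT followed by polling, rather than an immediate SIGKILL, so spdk_tgt gets a chance to exit cleanly. A hedged sketch of that stop-and-wait pattern (variable names are illustrative, not the ones used in json_config/common.sh):

  # send SIGINT and poll up to 30 times, 0.5 s apart, until the process is gone (sketch)
  kill -SIGINT "$TGT_PID"
  for _ in $(seq 1 30); do
      kill -0 "$TGT_PID" 2>/dev/null || break   # kill -0 only checks that the PID still exists
      sleep 0.5
  done
  echo 'SPDK target shutdown done'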
00:04:14.562 11:22:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.562 11:22:54 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.562 11:22:54 json_config -- json_config/common.sh@10 -- # shift 00:04:14.562 11:22:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.562 11:22:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.562 11:22:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.562 11:22:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.562 11:22:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.562 11:22:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2802474 00:04:14.562 11:22:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.562 11:22:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.562 Waiting for target to run... 00:04:14.562 11:22:54 json_config -- json_config/common.sh@25 -- # waitforlisten 2802474 /var/tmp/spdk_tgt.sock 00:04:14.562 11:22:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 2802474 ']' 00:04:14.562 11:22:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.562 11:22:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.562 11:22:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.563 11:22:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.563 11:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.563 [2024-11-15 11:22:54.823030] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:14.563 [2024-11-15 11:22:54.823155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802474 ] 00:04:15.127 [2024-11-15 11:22:55.399198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.127 [2024-11-15 11:22:55.450082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.409 [2024-11-15 11:22:58.500526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.409 [2024-11-15 11:22:58.532965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.975 11:22:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.975 11:22:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:18.975 11:22:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.975 00:04:18.975 11:22:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:18.975 11:22:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:18.975 INFO: Checking if target configuration is the same... 
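After the relaunch from spdk_tgt_config.json, the test verifies that the target reproduced exactly the configuration it was saved with: json_diff.sh (run on the following lines) pulls a fresh save_config from the live target, normalizes both it and the on-disk JSON with config_filter.py -method sort, and compares them with diff -u. A minimal sketch of that check, assuming config_filter.py reads the configuration on stdin as its use in json_diff.sh suggests (temp-file names are illustrative):

  # compare the live configuration against the JSON the target was started with (sketch)
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk_sorted.json
  diff -u /tmp/live_sorted.json /tmp/disk_sorted.json && echo 'INFO: JSON config files are the same'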
00:04:18.975 11:22:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.975 11:22:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:18.975 11:22:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.975 + '[' 2 -ne 2 ']' 00:04:18.975 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:18.975 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:18.975 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:18.975 +++ basename /dev/fd/62 00:04:18.975 ++ mktemp /tmp/62.XXX 00:04:18.975 + tmp_file_1=/tmp/62.9Ry 00:04:18.975 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.975 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.975 + tmp_file_2=/tmp/spdk_tgt_config.json.OV7 00:04:18.975 + ret=0 00:04:18.975 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.539 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.539 + diff -u /tmp/62.9Ry /tmp/spdk_tgt_config.json.OV7 00:04:19.539 + echo 'INFO: JSON config files are the same' 00:04:19.539 INFO: JSON config files are the same 00:04:19.539 + rm /tmp/62.9Ry /tmp/spdk_tgt_config.json.OV7 00:04:19.539 + exit 0 00:04:19.539 11:22:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:19.539 11:22:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:19.539 INFO: changing configuration and checking if this can be detected... 00:04:19.539 11:22:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:19.539 11:22:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:19.796 11:23:00 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.796 11:23:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:19.796 11:23:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.796 + '[' 2 -ne 2 ']' 00:04:19.796 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.796 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:19.796 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.796 +++ basename /dev/fd/62 00:04:19.796 ++ mktemp /tmp/62.XXX 00:04:19.796 + tmp_file_1=/tmp/62.sYc 00:04:19.796 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.796 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.796 + tmp_file_2=/tmp/spdk_tgt_config.json.fYC 00:04:19.796 + ret=0 00:04:19.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.053 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.311 + diff -u /tmp/62.sYc /tmp/spdk_tgt_config.json.fYC 00:04:20.311 + ret=1 00:04:20.311 + echo '=== Start of file: /tmp/62.sYc ===' 00:04:20.311 + cat /tmp/62.sYc 00:04:20.311 + echo '=== End of file: /tmp/62.sYc ===' 00:04:20.311 + echo '' 00:04:20.311 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fYC ===' 00:04:20.311 + cat /tmp/spdk_tgt_config.json.fYC 00:04:20.311 + echo '=== End of file: /tmp/spdk_tgt_config.json.fYC ===' 00:04:20.311 + echo '' 00:04:20.311 + rm /tmp/62.sYc /tmp/spdk_tgt_config.json.fYC 00:04:20.311 + exit 1 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:20.311 INFO: configuration change detected. 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 2802474 ]] 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.311 11:23:00 json_config -- json_config/json_config.sh@330 -- # killprocess 2802474 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 2802474 ']' 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@958 -- # kill -0 2802474 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@959 -- # uname 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.311 11:23:00 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2802474 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2802474' 00:04:20.311 killing process with pid 2802474 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@973 -- # kill 2802474 00:04:20.311 11:23:00 json_config -- common/autotest_common.sh@978 -- # wait 2802474 00:04:22.211 11:23:02 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.211 11:23:02 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:22.211 11:23:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.211 11:23:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.211 11:23:02 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:22.211 11:23:02 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:22.211 INFO: Success 00:04:22.211 00:04:22.211 real 0m17.225s 00:04:22.211 user 0m19.035s 00:04:22.211 sys 0m2.651s 00:04:22.211 11:23:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.211 11:23:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.211 ************************************ 00:04:22.211 END TEST json_config 00:04:22.211 ************************************ 00:04:22.211 11:23:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.211 11:23:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.211 11:23:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.211 11:23:02 -- common/autotest_common.sh@10 -- # set +x 00:04:22.211 ************************************ 00:04:22.211 START TEST json_config_extra_key 00:04:22.211 ************************************ 00:04:22.211 11:23:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.211 11:23:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.211 11:23:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.211 11:23:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:22.211 11:23:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.211 11:23:02 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:22.211 11:23:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:22.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.212 --rc genhtml_branch_coverage=1 00:04:22.212 --rc genhtml_function_coverage=1 00:04:22.212 --rc genhtml_legend=1 00:04:22.212 --rc geninfo_all_blocks=1 00:04:22.212 --rc geninfo_unexecuted_blocks=1 00:04:22.212 00:04:22.212 ' 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:22.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.212 --rc genhtml_branch_coverage=1 00:04:22.212 --rc genhtml_function_coverage=1 00:04:22.212 --rc genhtml_legend=1 00:04:22.212 --rc geninfo_all_blocks=1 00:04:22.212 --rc geninfo_unexecuted_blocks=1 00:04:22.212 00:04:22.212 ' 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:22.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.212 --rc genhtml_branch_coverage=1 00:04:22.212 --rc genhtml_function_coverage=1 00:04:22.212 --rc genhtml_legend=1 00:04:22.212 --rc geninfo_all_blocks=1 00:04:22.212 --rc geninfo_unexecuted_blocks=1 00:04:22.212 00:04:22.212 ' 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:22.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.212 --rc genhtml_branch_coverage=1 00:04:22.212 --rc genhtml_function_coverage=1 00:04:22.212 --rc genhtml_legend=1 00:04:22.212 --rc geninfo_all_blocks=1 00:04:22.212 --rc geninfo_unexecuted_blocks=1 00:04:22.212 00:04:22.212 ' 00:04:22.212 11:23:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.212 11:23:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.212 11:23:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.212 11:23:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.212 11:23:02 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.212 11:23:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:22.212 11:23:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:22.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:22.212 11:23:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.212 INFO: launching applications... 
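json_config_extra_key.sh drives the same helpers as the previous test, but keeps its per-app state in the associative arrays declared above (app_pid, app_socket, app_params, configs_path) and points configs_path at extra_key.json. A hedged sketch of that bookkeeping and of the launch assembled from it (the real work is done by json_config_test_start_app in json_config/common.sh; names here mirror the declarations above):

  # per-app bookkeeping, then a launch built from it (sketch)
  declare -A app_pid app_socket app_params configs_path
  app_socket[target]=/var/tmp/spdk_tgt.sock
  app_params[target]='-m 0x1 -s 1024'
  configs_path[target]=./test/json_config/extra_key.json
  ./build/bin/spdk_tgt ${app_params[target]} -r ${app_socket[target]} --json ${configs_path[target]} &
  app_pid[target]=$!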
00:04:22.212 11:23:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2803522 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.212 Waiting for target to run... 00:04:22.212 11:23:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2803522 /var/tmp/spdk_tgt.sock 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2803522 ']' 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.212 11:23:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.212 [2024-11-15 11:23:02.439315] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:22.212 [2024-11-15 11:23:02.439410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803522 ] 00:04:22.471 [2024-11-15 11:23:02.787776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.471 [2024-11-15 11:23:02.829501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.036 11:23:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.036 11:23:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:23.036 11:23:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:23.036 00:04:23.036 11:23:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:23.036 INFO: shutting down applications... 
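The launch above only returns once waitforlisten has seen the target answering on /var/tmp/spdk_tgt.sock; only then does the test move on to the shutdown announced here. As an illustrative readiness gate, a sketch rather than the actual waitforlisten helper from autotest_common.sh, the same effect can be had by polling an RPC that is always available:

  # block until the RPC socket answers, then proceed (sketch)
  until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done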
00:04:23.036 11:23:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2803522 ]] 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2803522 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2803522 00:04:23.037 11:23:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2803522 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.603 11:23:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.603 SPDK target shutdown done 00:04:23.603 11:23:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:23.603 Success 00:04:23.603 00:04:23.603 real 0m1.669s 00:04:23.603 user 0m1.676s 00:04:23.603 sys 0m0.445s 00:04:23.603 11:23:03 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.603 11:23:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:23.603 ************************************ 00:04:23.603 END TEST json_config_extra_key 00:04:23.603 ************************************ 00:04:23.603 11:23:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.603 11:23:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.603 11:23:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.603 11:23:03 -- common/autotest_common.sh@10 -- # set +x 00:04:23.603 ************************************ 00:04:23.603 START TEST alias_rpc 00:04:23.603 ************************************ 00:04:23.603 11:23:03 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:23.603 * Looking for test storage... 
00:04:23.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:23.603 11:23:04 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.603 11:23:04 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.603 11:23:04 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.861 11:23:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.861 --rc genhtml_branch_coverage=1 00:04:23.861 --rc genhtml_function_coverage=1 00:04:23.861 --rc genhtml_legend=1 00:04:23.861 --rc geninfo_all_blocks=1 00:04:23.861 --rc geninfo_unexecuted_blocks=1 00:04:23.861 00:04:23.861 ' 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.861 --rc genhtml_branch_coverage=1 00:04:23.861 --rc genhtml_function_coverage=1 00:04:23.861 --rc genhtml_legend=1 00:04:23.861 --rc geninfo_all_blocks=1 00:04:23.861 --rc geninfo_unexecuted_blocks=1 00:04:23.861 00:04:23.861 ' 00:04:23.861 11:23:04 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.861 --rc genhtml_branch_coverage=1 00:04:23.861 --rc genhtml_function_coverage=1 00:04:23.861 --rc genhtml_legend=1 00:04:23.861 --rc geninfo_all_blocks=1 00:04:23.861 --rc geninfo_unexecuted_blocks=1 00:04:23.861 00:04:23.861 ' 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.861 --rc genhtml_branch_coverage=1 00:04:23.861 --rc genhtml_function_coverage=1 00:04:23.861 --rc genhtml_legend=1 00:04:23.861 --rc geninfo_all_blocks=1 00:04:23.861 --rc geninfo_unexecuted_blocks=1 00:04:23.861 00:04:23.861 ' 00:04:23.861 11:23:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:23.861 11:23:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2803839 00:04:23.861 11:23:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.861 11:23:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2803839 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2803839 ']' 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.861 11:23:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.861 [2024-11-15 11:23:04.165914] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:23.861 [2024-11-15 11:23:04.165985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803839 ] 00:04:23.861 [2024-11-15 11:23:04.229032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.861 [2024-11-15 11:23:04.285408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.119 11:23:04 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.119 11:23:04 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:24.119 11:23:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:24.683 11:23:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2803839 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2803839 ']' 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2803839 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2803839 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2803839' 00:04:24.683 killing process with pid 2803839 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 2803839 00:04:24.683 11:23:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 2803839 00:04:24.940 00:04:24.940 real 0m1.319s 00:04:24.940 user 0m1.431s 00:04:24.940 sys 0m0.429s 00:04:24.940 11:23:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.940 11:23:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.940 ************************************ 00:04:24.940 END TEST alias_rpc 00:04:24.940 ************************************ 00:04:24.940 11:23:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:24.940 11:23:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:24.940 11:23:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.940 11:23:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.940 11:23:05 -- common/autotest_common.sh@10 -- # set +x 00:04:24.940 ************************************ 00:04:24.940 START TEST spdkcli_tcp 00:04:24.940 ************************************ 00:04:24.940 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:25.198 * Looking for test storage... 
00:04:25.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:25.198 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.198 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.198 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.198 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.198 11:23:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.199 11:23:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.199 11:23:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.199 --rc genhtml_branch_coverage=1 00:04:25.199 --rc genhtml_function_coverage=1 00:04:25.199 --rc genhtml_legend=1 00:04:25.199 --rc geninfo_all_blocks=1 00:04:25.199 --rc geninfo_unexecuted_blocks=1 00:04:25.199 00:04:25.199 ' 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.199 --rc genhtml_branch_coverage=1 00:04:25.199 --rc genhtml_function_coverage=1 00:04:25.199 --rc genhtml_legend=1 00:04:25.199 --rc geninfo_all_blocks=1 00:04:25.199 --rc 
geninfo_unexecuted_blocks=1 00:04:25.199 00:04:25.199 ' 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.199 --rc genhtml_branch_coverage=1 00:04:25.199 --rc genhtml_function_coverage=1 00:04:25.199 --rc genhtml_legend=1 00:04:25.199 --rc geninfo_all_blocks=1 00:04:25.199 --rc geninfo_unexecuted_blocks=1 00:04:25.199 00:04:25.199 ' 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.199 --rc genhtml_branch_coverage=1 00:04:25.199 --rc genhtml_function_coverage=1 00:04:25.199 --rc genhtml_legend=1 00:04:25.199 --rc geninfo_all_blocks=1 00:04:25.199 --rc geninfo_unexecuted_blocks=1 00:04:25.199 00:04:25.199 ' 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2804033 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:25.199 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2804033 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2804033 ']' 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.199 11:23:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.199 [2024-11-15 11:23:05.544772] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:25.199 [2024-11-15 11:23:05.544861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804033 ] 00:04:25.199 [2024-11-15 11:23:05.611373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.457 [2024-11-15 11:23:05.670789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.457 [2024-11-15 11:23:05.670793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.715 11:23:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.715 11:23:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:25.715 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2804048 00:04:25.715 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:25.715 11:23:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:25.974 [ 00:04:25.974 "bdev_malloc_delete", 00:04:25.974 "bdev_malloc_create", 00:04:25.974 "bdev_null_resize", 00:04:25.974 "bdev_null_delete", 00:04:25.974 "bdev_null_create", 00:04:25.974 "bdev_nvme_cuse_unregister", 00:04:25.974 "bdev_nvme_cuse_register", 00:04:25.974 "bdev_opal_new_user", 00:04:25.974 "bdev_opal_set_lock_state", 00:04:25.974 "bdev_opal_delete", 00:04:25.974 "bdev_opal_get_info", 00:04:25.974 "bdev_opal_create", 00:04:25.974 "bdev_nvme_opal_revert", 00:04:25.974 "bdev_nvme_opal_init", 00:04:25.974 "bdev_nvme_send_cmd", 00:04:25.974 "bdev_nvme_set_keys", 00:04:25.974 "bdev_nvme_get_path_iostat", 00:04:25.974 "bdev_nvme_get_mdns_discovery_info", 00:04:25.974 "bdev_nvme_stop_mdns_discovery", 00:04:25.974 "bdev_nvme_start_mdns_discovery", 00:04:25.974 "bdev_nvme_set_multipath_policy", 00:04:25.974 "bdev_nvme_set_preferred_path", 00:04:25.974 "bdev_nvme_get_io_paths", 00:04:25.974 "bdev_nvme_remove_error_injection", 00:04:25.974 "bdev_nvme_add_error_injection", 00:04:25.974 "bdev_nvme_get_discovery_info", 00:04:25.974 "bdev_nvme_stop_discovery", 00:04:25.974 "bdev_nvme_start_discovery", 00:04:25.974 "bdev_nvme_get_controller_health_info", 00:04:25.974 "bdev_nvme_disable_controller", 00:04:25.974 "bdev_nvme_enable_controller", 00:04:25.974 "bdev_nvme_reset_controller", 00:04:25.974 "bdev_nvme_get_transport_statistics", 00:04:25.974 "bdev_nvme_apply_firmware", 00:04:25.974 "bdev_nvme_detach_controller", 00:04:25.974 "bdev_nvme_get_controllers", 00:04:25.974 "bdev_nvme_attach_controller", 00:04:25.974 "bdev_nvme_set_hotplug", 00:04:25.974 "bdev_nvme_set_options", 00:04:25.974 "bdev_passthru_delete", 00:04:25.974 "bdev_passthru_create", 00:04:25.974 "bdev_lvol_set_parent_bdev", 00:04:25.974 "bdev_lvol_set_parent", 00:04:25.974 "bdev_lvol_check_shallow_copy", 00:04:25.974 "bdev_lvol_start_shallow_copy", 00:04:25.974 "bdev_lvol_grow_lvstore", 00:04:25.974 "bdev_lvol_get_lvols", 00:04:25.974 "bdev_lvol_get_lvstores", 00:04:25.974 "bdev_lvol_delete", 00:04:25.974 "bdev_lvol_set_read_only", 00:04:25.974 "bdev_lvol_resize", 00:04:25.974 "bdev_lvol_decouple_parent", 00:04:25.974 "bdev_lvol_inflate", 00:04:25.974 "bdev_lvol_rename", 00:04:25.974 "bdev_lvol_clone_bdev", 00:04:25.974 "bdev_lvol_clone", 00:04:25.974 "bdev_lvol_snapshot", 00:04:25.974 "bdev_lvol_create", 00:04:25.974 "bdev_lvol_delete_lvstore", 00:04:25.974 "bdev_lvol_rename_lvstore", 
00:04:25.974 "bdev_lvol_create_lvstore", 00:04:25.974 "bdev_raid_set_options", 00:04:25.974 "bdev_raid_remove_base_bdev", 00:04:25.974 "bdev_raid_add_base_bdev", 00:04:25.974 "bdev_raid_delete", 00:04:25.974 "bdev_raid_create", 00:04:25.974 "bdev_raid_get_bdevs", 00:04:25.974 "bdev_error_inject_error", 00:04:25.974 "bdev_error_delete", 00:04:25.974 "bdev_error_create", 00:04:25.974 "bdev_split_delete", 00:04:25.974 "bdev_split_create", 00:04:25.974 "bdev_delay_delete", 00:04:25.974 "bdev_delay_create", 00:04:25.974 "bdev_delay_update_latency", 00:04:25.974 "bdev_zone_block_delete", 00:04:25.974 "bdev_zone_block_create", 00:04:25.974 "blobfs_create", 00:04:25.974 "blobfs_detect", 00:04:25.974 "blobfs_set_cache_size", 00:04:25.974 "bdev_aio_delete", 00:04:25.974 "bdev_aio_rescan", 00:04:25.974 "bdev_aio_create", 00:04:25.974 "bdev_ftl_set_property", 00:04:25.974 "bdev_ftl_get_properties", 00:04:25.974 "bdev_ftl_get_stats", 00:04:25.974 "bdev_ftl_unmap", 00:04:25.974 "bdev_ftl_unload", 00:04:25.974 "bdev_ftl_delete", 00:04:25.974 "bdev_ftl_load", 00:04:25.974 "bdev_ftl_create", 00:04:25.974 "bdev_virtio_attach_controller", 00:04:25.974 "bdev_virtio_scsi_get_devices", 00:04:25.974 "bdev_virtio_detach_controller", 00:04:25.974 "bdev_virtio_blk_set_hotplug", 00:04:25.974 "bdev_iscsi_delete", 00:04:25.974 "bdev_iscsi_create", 00:04:25.974 "bdev_iscsi_set_options", 00:04:25.974 "accel_error_inject_error", 00:04:25.974 "ioat_scan_accel_module", 00:04:25.974 "dsa_scan_accel_module", 00:04:25.974 "iaa_scan_accel_module", 00:04:25.974 "vfu_virtio_create_fs_endpoint", 00:04:25.974 "vfu_virtio_create_scsi_endpoint", 00:04:25.974 "vfu_virtio_scsi_remove_target", 00:04:25.974 "vfu_virtio_scsi_add_target", 00:04:25.974 "vfu_virtio_create_blk_endpoint", 00:04:25.974 "vfu_virtio_delete_endpoint", 00:04:25.974 "keyring_file_remove_key", 00:04:25.974 "keyring_file_add_key", 00:04:25.974 "keyring_linux_set_options", 00:04:25.974 "fsdev_aio_delete", 00:04:25.974 "fsdev_aio_create", 00:04:25.974 "iscsi_get_histogram", 00:04:25.974 "iscsi_enable_histogram", 00:04:25.974 "iscsi_set_options", 00:04:25.974 "iscsi_get_auth_groups", 00:04:25.974 "iscsi_auth_group_remove_secret", 00:04:25.974 "iscsi_auth_group_add_secret", 00:04:25.974 "iscsi_delete_auth_group", 00:04:25.974 "iscsi_create_auth_group", 00:04:25.974 "iscsi_set_discovery_auth", 00:04:25.974 "iscsi_get_options", 00:04:25.974 "iscsi_target_node_request_logout", 00:04:25.974 "iscsi_target_node_set_redirect", 00:04:25.974 "iscsi_target_node_set_auth", 00:04:25.974 "iscsi_target_node_add_lun", 00:04:25.974 "iscsi_get_stats", 00:04:25.974 "iscsi_get_connections", 00:04:25.974 "iscsi_portal_group_set_auth", 00:04:25.974 "iscsi_start_portal_group", 00:04:25.974 "iscsi_delete_portal_group", 00:04:25.974 "iscsi_create_portal_group", 00:04:25.974 "iscsi_get_portal_groups", 00:04:25.974 "iscsi_delete_target_node", 00:04:25.974 "iscsi_target_node_remove_pg_ig_maps", 00:04:25.974 "iscsi_target_node_add_pg_ig_maps", 00:04:25.974 "iscsi_create_target_node", 00:04:25.974 "iscsi_get_target_nodes", 00:04:25.974 "iscsi_delete_initiator_group", 00:04:25.974 "iscsi_initiator_group_remove_initiators", 00:04:25.974 "iscsi_initiator_group_add_initiators", 00:04:25.974 "iscsi_create_initiator_group", 00:04:25.974 "iscsi_get_initiator_groups", 00:04:25.974 "nvmf_set_crdt", 00:04:25.974 "nvmf_set_config", 00:04:25.974 "nvmf_set_max_subsystems", 00:04:25.974 "nvmf_stop_mdns_prr", 00:04:25.974 "nvmf_publish_mdns_prr", 00:04:25.974 "nvmf_subsystem_get_listeners", 00:04:25.974 
"nvmf_subsystem_get_qpairs", 00:04:25.974 "nvmf_subsystem_get_controllers", 00:04:25.974 "nvmf_get_stats", 00:04:25.974 "nvmf_get_transports", 00:04:25.974 "nvmf_create_transport", 00:04:25.974 "nvmf_get_targets", 00:04:25.974 "nvmf_delete_target", 00:04:25.974 "nvmf_create_target", 00:04:25.974 "nvmf_subsystem_allow_any_host", 00:04:25.974 "nvmf_subsystem_set_keys", 00:04:25.974 "nvmf_subsystem_remove_host", 00:04:25.974 "nvmf_subsystem_add_host", 00:04:25.974 "nvmf_ns_remove_host", 00:04:25.974 "nvmf_ns_add_host", 00:04:25.974 "nvmf_subsystem_remove_ns", 00:04:25.974 "nvmf_subsystem_set_ns_ana_group", 00:04:25.974 "nvmf_subsystem_add_ns", 00:04:25.974 "nvmf_subsystem_listener_set_ana_state", 00:04:25.974 "nvmf_discovery_get_referrals", 00:04:25.974 "nvmf_discovery_remove_referral", 00:04:25.974 "nvmf_discovery_add_referral", 00:04:25.974 "nvmf_subsystem_remove_listener", 00:04:25.974 "nvmf_subsystem_add_listener", 00:04:25.974 "nvmf_delete_subsystem", 00:04:25.974 "nvmf_create_subsystem", 00:04:25.974 "nvmf_get_subsystems", 00:04:25.974 "env_dpdk_get_mem_stats", 00:04:25.974 "nbd_get_disks", 00:04:25.974 "nbd_stop_disk", 00:04:25.974 "nbd_start_disk", 00:04:25.974 "ublk_recover_disk", 00:04:25.974 "ublk_get_disks", 00:04:25.974 "ublk_stop_disk", 00:04:25.974 "ublk_start_disk", 00:04:25.974 "ublk_destroy_target", 00:04:25.974 "ublk_create_target", 00:04:25.974 "virtio_blk_create_transport", 00:04:25.974 "virtio_blk_get_transports", 00:04:25.974 "vhost_controller_set_coalescing", 00:04:25.974 "vhost_get_controllers", 00:04:25.974 "vhost_delete_controller", 00:04:25.974 "vhost_create_blk_controller", 00:04:25.974 "vhost_scsi_controller_remove_target", 00:04:25.974 "vhost_scsi_controller_add_target", 00:04:25.974 "vhost_start_scsi_controller", 00:04:25.974 "vhost_create_scsi_controller", 00:04:25.974 "thread_set_cpumask", 00:04:25.974 "scheduler_set_options", 00:04:25.975 "framework_get_governor", 00:04:25.975 "framework_get_scheduler", 00:04:25.975 "framework_set_scheduler", 00:04:25.975 "framework_get_reactors", 00:04:25.975 "thread_get_io_channels", 00:04:25.975 "thread_get_pollers", 00:04:25.975 "thread_get_stats", 00:04:25.975 "framework_monitor_context_switch", 00:04:25.975 "spdk_kill_instance", 00:04:25.975 "log_enable_timestamps", 00:04:25.975 "log_get_flags", 00:04:25.975 "log_clear_flag", 00:04:25.975 "log_set_flag", 00:04:25.975 "log_get_level", 00:04:25.975 "log_set_level", 00:04:25.975 "log_get_print_level", 00:04:25.975 "log_set_print_level", 00:04:25.975 "framework_enable_cpumask_locks", 00:04:25.975 "framework_disable_cpumask_locks", 00:04:25.975 "framework_wait_init", 00:04:25.975 "framework_start_init", 00:04:25.975 "scsi_get_devices", 00:04:25.975 "bdev_get_histogram", 00:04:25.975 "bdev_enable_histogram", 00:04:25.975 "bdev_set_qos_limit", 00:04:25.975 "bdev_set_qd_sampling_period", 00:04:25.975 "bdev_get_bdevs", 00:04:25.975 "bdev_reset_iostat", 00:04:25.975 "bdev_get_iostat", 00:04:25.975 "bdev_examine", 00:04:25.975 "bdev_wait_for_examine", 00:04:25.975 "bdev_set_options", 00:04:25.975 "accel_get_stats", 00:04:25.975 "accel_set_options", 00:04:25.975 "accel_set_driver", 00:04:25.975 "accel_crypto_key_destroy", 00:04:25.975 "accel_crypto_keys_get", 00:04:25.975 "accel_crypto_key_create", 00:04:25.975 "accel_assign_opc", 00:04:25.975 "accel_get_module_info", 00:04:25.975 "accel_get_opc_assignments", 00:04:25.975 "vmd_rescan", 00:04:25.975 "vmd_remove_device", 00:04:25.975 "vmd_enable", 00:04:25.975 "sock_get_default_impl", 00:04:25.975 "sock_set_default_impl", 
00:04:25.975 "sock_impl_set_options", 00:04:25.975 "sock_impl_get_options", 00:04:25.975 "iobuf_get_stats", 00:04:25.975 "iobuf_set_options", 00:04:25.975 "keyring_get_keys", 00:04:25.975 "vfu_tgt_set_base_path", 00:04:25.975 "framework_get_pci_devices", 00:04:25.975 "framework_get_config", 00:04:25.975 "framework_get_subsystems", 00:04:25.975 "fsdev_set_opts", 00:04:25.975 "fsdev_get_opts", 00:04:25.975 "trace_get_info", 00:04:25.975 "trace_get_tpoint_group_mask", 00:04:25.975 "trace_disable_tpoint_group", 00:04:25.975 "trace_enable_tpoint_group", 00:04:25.975 "trace_clear_tpoint_mask", 00:04:25.975 "trace_set_tpoint_mask", 00:04:25.975 "notify_get_notifications", 00:04:25.975 "notify_get_types", 00:04:25.975 "spdk_get_version", 00:04:25.975 "rpc_get_methods" 00:04:25.975 ] 00:04:25.975 11:23:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:25.975 11:23:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:25.975 11:23:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2804033 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2804033 ']' 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2804033 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2804033 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2804033' 00:04:25.975 killing process with pid 2804033 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2804033 00:04:25.975 11:23:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2804033 00:04:26.543 00:04:26.543 real 0m1.373s 00:04:26.543 user 0m2.449s 00:04:26.543 sys 0m0.462s 00:04:26.543 11:23:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.543 11:23:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:26.543 ************************************ 00:04:26.543 END TEST spdkcli_tcp 00:04:26.543 ************************************ 00:04:26.543 11:23:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:26.543 11:23:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.543 11:23:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.543 11:23:06 -- common/autotest_common.sh@10 -- # set +x 00:04:26.543 ************************************ 00:04:26.543 START TEST dpdk_mem_utility 00:04:26.543 ************************************ 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:26.543 * Looking for test storage... 
00:04:26.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.543 11:23:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:26.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.543 --rc genhtml_branch_coverage=1 00:04:26.543 --rc genhtml_function_coverage=1 00:04:26.543 --rc genhtml_legend=1 00:04:26.543 --rc geninfo_all_blocks=1 00:04:26.543 --rc geninfo_unexecuted_blocks=1 00:04:26.543 00:04:26.543 ' 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:26.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.543 --rc 
genhtml_branch_coverage=1 00:04:26.543 --rc genhtml_function_coverage=1 00:04:26.543 --rc genhtml_legend=1 00:04:26.543 --rc geninfo_all_blocks=1 00:04:26.543 --rc geninfo_unexecuted_blocks=1 00:04:26.543 00:04:26.543 ' 00:04:26.543 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:26.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.543 --rc genhtml_branch_coverage=1 00:04:26.543 --rc genhtml_function_coverage=1 00:04:26.544 --rc genhtml_legend=1 00:04:26.544 --rc geninfo_all_blocks=1 00:04:26.544 --rc geninfo_unexecuted_blocks=1 00:04:26.544 00:04:26.544 ' 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:26.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.544 --rc genhtml_branch_coverage=1 00:04:26.544 --rc genhtml_function_coverage=1 00:04:26.544 --rc genhtml_legend=1 00:04:26.544 --rc geninfo_all_blocks=1 00:04:26.544 --rc geninfo_unexecuted_blocks=1 00:04:26.544 00:04:26.544 ' 00:04:26.544 11:23:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:26.544 11:23:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2804253 00:04:26.544 11:23:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:26.544 11:23:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2804253 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2804253 ']' 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.544 11:23:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:26.544 [2024-11-15 11:23:06.954010] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:26.544 [2024-11-15 11:23:06.954111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804253 ] 00:04:26.849 [2024-11-15 11:23:07.023895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.849 [2024-11-15 11:23:07.084403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.127 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.127 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:27.127 11:23:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:27.127 11:23:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:27.127 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:27.127 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.127 { 00:04:27.127 "filename": "/tmp/spdk_mem_dump.txt" 00:04:27.127 } 00:04:27.127 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:27.127 11:23:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:27.127 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:27.127 1 heaps totaling size 810.000000 MiB 00:04:27.127 size: 810.000000 MiB heap id: 0 00:04:27.127 end heaps---------- 00:04:27.127 9 mempools totaling size 595.772034 MiB 00:04:27.127 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:27.127 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:27.127 size: 92.545471 MiB name: bdev_io_2804253 00:04:27.127 size: 50.003479 MiB name: msgpool_2804253 00:04:27.127 size: 36.509338 MiB name: fsdev_io_2804253 00:04:27.127 size: 21.763794 MiB name: PDU_Pool 00:04:27.127 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:27.127 size: 4.133484 MiB name: evtpool_2804253 00:04:27.127 size: 0.026123 MiB name: Session_Pool 00:04:27.127 end mempools------- 00:04:27.127 6 memzones totaling size 4.142822 MiB 00:04:27.127 size: 1.000366 MiB name: RG_ring_0_2804253 00:04:27.127 size: 1.000366 MiB name: RG_ring_1_2804253 00:04:27.127 size: 1.000366 MiB name: RG_ring_4_2804253 00:04:27.127 size: 1.000366 MiB name: RG_ring_5_2804253 00:04:27.127 size: 0.125366 MiB name: RG_ring_2_2804253 00:04:27.127 size: 0.015991 MiB name: RG_ring_3_2804253 00:04:27.127 end memzones------- 00:04:27.127 11:23:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:27.127 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:27.127 list of free elements. 
size: 10.862488 MiB 00:04:27.127 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:27.127 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:27.127 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:27.127 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:27.127 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:27.127 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:27.127 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:27.127 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:27.127 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:27.127 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:27.127 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:27.127 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:27.127 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:27.127 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:27.127 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:27.127 list of standard malloc elements. size: 199.218628 MiB 00:04:27.127 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:27.127 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:27.127 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:27.127 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:27.127 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:27.127 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:27.127 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:27.127 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:27.127 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:27.127 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:27.127 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:27.127 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:27.127 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:27.127 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:27.127 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:27.127 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:27.127 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:27.127 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:27.128 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:27.128 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:27.128 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:27.128 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:27.128 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:27.128 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:27.128 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:27.128 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:27.128 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:27.128 list of memzone associated elements. size: 599.918884 MiB 00:04:27.128 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:27.128 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:27.128 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:27.128 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:27.128 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:27.128 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2804253_0 00:04:27.128 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:27.128 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2804253_0 00:04:27.128 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:27.128 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2804253_0 00:04:27.128 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:27.128 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:27.128 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:27.128 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:27.128 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:27.128 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2804253_0 00:04:27.128 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:27.128 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2804253 00:04:27.128 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:27.128 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2804253 00:04:27.128 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:27.128 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:27.128 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:27.128 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:27.128 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:27.128 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:27.128 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:27.128 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:27.128 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:27.128 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2804253 00:04:27.128 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:27.128 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2804253 00:04:27.128 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:27.128 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2804253 00:04:27.128 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:27.128 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2804253 00:04:27.128 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:27.128 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2804253 00:04:27.128 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:27.128 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2804253 00:04:27.128 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:27.128 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:27.128 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:27.128 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:27.128 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:27.128 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:27.128 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:27.128 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2804253 00:04:27.128 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:27.128 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2804253 00:04:27.128 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:27.128 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:27.128 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:27.128 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:27.128 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:27.128 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2804253 00:04:27.128 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:27.128 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:27.128 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:27.128 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2804253 00:04:27.128 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:27.128 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2804253 00:04:27.128 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:27.128 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2804253 00:04:27.128 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:27.128 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:27.128 11:23:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:27.128 11:23:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2804253 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2804253 ']' 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2804253 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2804253 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2804253' 00:04:27.128 killing process with pid 2804253 00:04:27.128 11:23:07 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2804253 00:04:27.128 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2804253 00:04:27.694 00:04:27.694 real 0m1.182s 00:04:27.694 user 0m1.138s 00:04:27.694 sys 0m0.448s 00:04:27.694 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.694 11:23:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:27.694 ************************************ 00:04:27.694 END TEST dpdk_mem_utility 00:04:27.694 ************************************ 00:04:27.694 11:23:07 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:27.694 11:23:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.694 11:23:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.694 11:23:07 -- common/autotest_common.sh@10 -- # set +x 00:04:27.694 ************************************ 00:04:27.694 START TEST event 00:04:27.694 ************************************ 00:04:27.694 11:23:07 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:27.694 * Looking for test storage... 00:04:27.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:27.694 11:23:08 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:27.694 11:23:08 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:27.694 11:23:08 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:27.952 11:23:08 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:27.952 11:23:08 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.952 11:23:08 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.952 11:23:08 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.952 11:23:08 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.952 11:23:08 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.952 11:23:08 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.952 11:23:08 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.952 11:23:08 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.952 11:23:08 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.952 11:23:08 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.952 11:23:08 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.952 11:23:08 event -- scripts/common.sh@344 -- # case "$op" in 00:04:27.952 11:23:08 event -- scripts/common.sh@345 -- # : 1 00:04:27.952 11:23:08 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.952 11:23:08 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.952 11:23:08 event -- scripts/common.sh@365 -- # decimal 1 00:04:27.952 11:23:08 event -- scripts/common.sh@353 -- # local d=1 00:04:27.952 11:23:08 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.952 11:23:08 event -- scripts/common.sh@355 -- # echo 1 00:04:27.952 11:23:08 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.952 11:23:08 event -- scripts/common.sh@366 -- # decimal 2 00:04:27.953 11:23:08 event -- scripts/common.sh@353 -- # local d=2 00:04:27.953 11:23:08 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.953 11:23:08 event -- scripts/common.sh@355 -- # echo 2 00:04:27.953 11:23:08 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.953 11:23:08 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.953 11:23:08 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.953 11:23:08 event -- scripts/common.sh@368 -- # return 0 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.953 --rc genhtml_branch_coverage=1 00:04:27.953 --rc genhtml_function_coverage=1 00:04:27.953 --rc genhtml_legend=1 00:04:27.953 --rc geninfo_all_blocks=1 00:04:27.953 --rc geninfo_unexecuted_blocks=1 00:04:27.953 00:04:27.953 ' 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.953 --rc genhtml_branch_coverage=1 00:04:27.953 --rc genhtml_function_coverage=1 00:04:27.953 --rc genhtml_legend=1 00:04:27.953 --rc geninfo_all_blocks=1 00:04:27.953 --rc geninfo_unexecuted_blocks=1 00:04:27.953 00:04:27.953 ' 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.953 --rc genhtml_branch_coverage=1 00:04:27.953 --rc genhtml_function_coverage=1 00:04:27.953 --rc genhtml_legend=1 00:04:27.953 --rc geninfo_all_blocks=1 00:04:27.953 --rc geninfo_unexecuted_blocks=1 00:04:27.953 00:04:27.953 ' 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:27.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.953 --rc genhtml_branch_coverage=1 00:04:27.953 --rc genhtml_function_coverage=1 00:04:27.953 --rc genhtml_legend=1 00:04:27.953 --rc geninfo_all_blocks=1 00:04:27.953 --rc geninfo_unexecuted_blocks=1 00:04:27.953 00:04:27.953 ' 00:04:27.953 11:23:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:27.953 11:23:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:27.953 11:23:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:27.953 11:23:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.953 11:23:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.953 ************************************ 00:04:27.953 START TEST event_perf 00:04:27.953 ************************************ 00:04:27.953 11:23:08 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:27.953 Running I/O for 1 seconds...[2024-11-15 11:23:08.182551] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:27.953 [2024-11-15 11:23:08.182626] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804453 ] 00:04:27.953 [2024-11-15 11:23:08.248264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:27.953 [2024-11-15 11:23:08.307824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.953 [2024-11-15 11:23:08.307886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:27.953 [2024-11-15 11:23:08.307995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:27.953 [2024-11-15 11:23:08.308004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.327 Running I/O for 1 seconds... 00:04:29.327 lcore 0: 230854 00:04:29.327 lcore 1: 230852 00:04:29.327 lcore 2: 230853 00:04:29.327 lcore 3: 230853 00:04:29.327 done. 00:04:29.327 00:04:29.327 real 0m1.206s 00:04:29.327 user 0m4.133s 00:04:29.327 sys 0m0.068s 00:04:29.327 11:23:09 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.327 11:23:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.327 ************************************ 00:04:29.327 END TEST event_perf 00:04:29.327 ************************************ 00:04:29.327 11:23:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:29.327 11:23:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:29.327 11:23:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.327 11:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:29.327 ************************************ 00:04:29.327 START TEST event_reactor 00:04:29.327 ************************************ 00:04:29.327 11:23:09 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:29.327 [2024-11-15 11:23:09.438554] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:29.327 [2024-11-15 11:23:09.438646] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804611 ] 00:04:29.327 [2024-11-15 11:23:09.505826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.327 [2024-11-15 11:23:09.560038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.261 test_start 00:04:30.261 oneshot 00:04:30.261 tick 100 00:04:30.261 tick 100 00:04:30.261 tick 250 00:04:30.261 tick 100 00:04:30.261 tick 100 00:04:30.261 tick 100 00:04:30.261 tick 250 00:04:30.261 tick 500 00:04:30.261 tick 100 00:04:30.261 tick 100 00:04:30.261 tick 250 00:04:30.261 tick 100 00:04:30.261 tick 100 00:04:30.261 test_end 00:04:30.261 00:04:30.261 real 0m1.197s 00:04:30.261 user 0m1.129s 00:04:30.261 sys 0m0.064s 00:04:30.262 11:23:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.262 11:23:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:30.262 ************************************ 00:04:30.262 END TEST event_reactor 00:04:30.262 ************************************ 00:04:30.262 11:23:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:30.262 11:23:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:30.262 11:23:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.262 11:23:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.262 ************************************ 00:04:30.262 START TEST event_reactor_perf 00:04:30.262 ************************************ 00:04:30.262 11:23:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:30.262 [2024-11-15 11:23:10.685164] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:30.262 [2024-11-15 11:23:10.685232] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804769 ] 00:04:30.520 [2024-11-15 11:23:10.753315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.520 [2024-11-15 11:23:10.810205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.455 test_start 00:04:31.455 test_end 00:04:31.455 Performance: 447964 events per second 00:04:31.455 00:04:31.455 real 0m1.200s 00:04:31.455 user 0m1.130s 00:04:31.455 sys 0m0.066s 00:04:31.455 11:23:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.455 11:23:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:31.455 ************************************ 00:04:31.455 END TEST event_reactor_perf 00:04:31.455 ************************************ 00:04:31.714 11:23:11 event -- event/event.sh@49 -- # uname -s 00:04:31.714 11:23:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:31.714 11:23:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:31.714 11:23:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.714 11:23:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.714 11:23:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.714 ************************************ 00:04:31.714 START TEST event_scheduler 00:04:31.714 ************************************ 00:04:31.714 11:23:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:31.714 * Looking for test storage... 
00:04:31.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:31.714 11:23:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.714 11:23:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.714 11:23:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.714 11:23:12 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:31.714 11:23:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.715 11:23:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:31.715 11:23:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.715 11:23:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.715 11:23:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.715 11:23:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.715 --rc genhtml_branch_coverage=1 00:04:31.715 --rc genhtml_function_coverage=1 00:04:31.715 --rc genhtml_legend=1 00:04:31.715 --rc geninfo_all_blocks=1 00:04:31.715 --rc geninfo_unexecuted_blocks=1 00:04:31.715 00:04:31.715 ' 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.715 --rc genhtml_branch_coverage=1 00:04:31.715 --rc genhtml_function_coverage=1 00:04:31.715 --rc genhtml_legend=1 00:04:31.715 --rc geninfo_all_blocks=1 00:04:31.715 --rc geninfo_unexecuted_blocks=1 00:04:31.715 00:04:31.715 ' 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.715 --rc genhtml_branch_coverage=1 00:04:31.715 --rc genhtml_function_coverage=1 00:04:31.715 --rc genhtml_legend=1 00:04:31.715 --rc geninfo_all_blocks=1 00:04:31.715 --rc geninfo_unexecuted_blocks=1 00:04:31.715 00:04:31.715 ' 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.715 --rc genhtml_branch_coverage=1 00:04:31.715 --rc genhtml_function_coverage=1 00:04:31.715 --rc genhtml_legend=1 00:04:31.715 --rc geninfo_all_blocks=1 00:04:31.715 --rc geninfo_unexecuted_blocks=1 00:04:31.715 00:04:31.715 ' 00:04:31.715 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:31.715 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2805078 00:04:31.715 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:31.715 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.715 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2805078 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2805078 ']' 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.715 11:23:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.715 [2024-11-15 11:23:12.112339] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:31.715 [2024-11-15 11:23:12.112419] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805078 ] 00:04:31.974 [2024-11-15 11:23:12.178099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:31.974 [2024-11-15 11:23:12.239942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.974 [2024-11-15 11:23:12.240049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.974 [2024-11-15 11:23:12.240120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:31.974 [2024-11-15 11:23:12.240124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:31.974 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.974 [2024-11-15 11:23:12.341167] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:31.974 [2024-11-15 11:23:12.341193] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:31.974 [2024-11-15 11:23:12.341210] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:31.974 [2024-11-15 11:23:12.341221] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:31.974 [2024-11-15 11:23:12.341230] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.974 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.974 11:23:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 [2024-11-15 11:23:12.444728] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
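The framework_set_scheduler / framework_start_init sequence above is what the scheduler test drives through its rpc_cmd wrapper. A minimal hand-run sketch of the same sequence, assuming the target app was started with --wait-for-rpc and listens on the default /var/tmp/spdk.sock; the relative script path is illustrative:

    ./scripts/rpc.py framework_set_scheduler dynamic   # must be issued before subsystem init completes
    ./scripts/rpc.py framework_start_init               # finish startup under the new scheduler
    ./scripts/rpc.py framework_get_scheduler            # confirm the active scheduler/governor
    ./scripts/rpc.py framework_get_reactors             # per-core reactor and lightweight-thread view

All four method names appear in the rpc_get_methods listing recorded earlier in this log.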
00:04:32.234 11:23:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:32.234 11:23:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.234 11:23:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 ************************************ 00:04:32.234 START TEST scheduler_create_thread 00:04:32.234 ************************************ 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 2 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 3 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 4 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 5 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 6 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 7 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 8 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 9 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 10 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.234 11:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.800 11:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.800 00:04:32.800 real 0m0.591s 00:04:32.800 user 0m0.009s 00:04:32.800 sys 0m0.005s 00:04:32.800 11:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.800 11:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.800 ************************************ 00:04:32.800 END TEST scheduler_create_thread 00:04:32.800 ************************************ 00:04:32.800 11:23:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:32.800 11:23:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2805078 00:04:32.800 11:23:13 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2805078 ']' 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2805078 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2805078 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2805078' 00:04:32.801 killing process with pid 2805078 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2805078 00:04:32.801 11:23:13 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2805078 00:04:33.367 [2024-11-15 11:23:13.544936] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
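The scheduler_create_thread run above creates pinned active and idle threads through the test's scheduler_plugin, lowers the activity of one thread, and deletes another. A sketch of the equivalent manual calls, mirroring the arguments recorded in the log; the plugin ships with the test app rather than the core RPC set, and the thread IDs 11 and 12 are specific to this run:

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread pinned by cpumask 0x1, 100 percent active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # drop thread 11 to 50 percent active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread 12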
00:04:33.367 00:04:33.367 real 0m1.834s 00:04:33.367 user 0m2.480s 00:04:33.367 sys 0m0.336s 00:04:33.367 11:23:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.367 11:23:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:33.367 ************************************ 00:04:33.367 END TEST event_scheduler 00:04:33.367 ************************************ 00:04:33.367 11:23:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:33.367 11:23:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:33.367 11:23:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.367 11:23:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.367 11:23:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:33.625 ************************************ 00:04:33.625 START TEST app_repeat 00:04:33.625 ************************************ 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2805272 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2805272' 00:04:33.626 Process app_repeat pid: 2805272 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:33.626 spdk_app_start Round 0 00:04:33.626 11:23:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2805272 /var/tmp/spdk-nbd.sock 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2805272 ']' 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.626 11:23:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 [2024-11-15 11:23:13.836939] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:33.626 [2024-11-15 11:23:13.837002] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805272 ] 00:04:33.626 [2024-11-15 11:23:13.900377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.626 [2024-11-15 11:23:13.955808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.626 [2024-11-15 11:23:13.955812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.884 11:23:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.884 11:23:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:33.884 11:23:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.142 Malloc0 00:04:34.142 11:23:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.405 Malloc1 00:04:34.405 11:23:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.405 11:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.406 11:23:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.406 11:23:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:34.664 /dev/nbd0 00:04:34.664 11:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:34.664 11:23:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.664 1+0 records in 00:04:34.664 1+0 records out 00:04:34.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200286 s, 20.5 MB/s 00:04:34.664 11:23:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.664 11:23:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.664 11:23:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.664 11:23:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.664 11:23:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.664 11:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.664 11:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.664 11:23:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:34.922 /dev/nbd1 00:04:34.922 11:23:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:34.922 11:23:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.922 1+0 records in 00:04:34.922 1+0 records out 00:04:34.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211469 s, 19.4 MB/s 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.922 11:23:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.922 11:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.922 11:23:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.180 
11:23:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.180 11:23:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.180 11:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.439 { 00:04:35.439 "nbd_device": "/dev/nbd0", 00:04:35.439 "bdev_name": "Malloc0" 00:04:35.439 }, 00:04:35.439 { 00:04:35.439 "nbd_device": "/dev/nbd1", 00:04:35.439 "bdev_name": "Malloc1" 00:04:35.439 } 00:04:35.439 ]' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.439 { 00:04:35.439 "nbd_device": "/dev/nbd0", 00:04:35.439 "bdev_name": "Malloc0" 00:04:35.439 }, 00:04:35.439 { 00:04:35.439 "nbd_device": "/dev/nbd1", 00:04:35.439 "bdev_name": "Malloc1" 00:04:35.439 } 00:04:35.439 ]' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.439 /dev/nbd1' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.439 /dev/nbd1' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.439 256+0 records in 00:04:35.439 256+0 records out 00:04:35.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518626 s, 202 MB/s 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.439 256+0 records in 00:04:35.439 256+0 records out 00:04:35.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201835 s, 52.0 MB/s 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.439 256+0 records in 00:04:35.439 256+0 records out 00:04:35.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219258 s, 47.8 MB/s 00:04:35.439 11:23:15 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.439 11:23:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.697 11:23:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.954 11:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.212 11:23:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.212 11:23:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.777 11:23:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.777 [2024-11-15 11:23:17.139771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.777 [2024-11-15 11:23:17.195235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.777 [2024-11-15 11:23:17.195238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.036 [2024-11-15 11:23:17.253438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:37.036 [2024-11-15 11:23:17.253516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:39.564 11:23:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.564 11:23:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:39.564 spdk_app_start Round 1 00:04:39.564 11:23:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2805272 /var/tmp/spdk-nbd.sock 00:04:39.564 11:23:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2805272 ']' 00:04:39.564 11:23:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.564 11:23:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.564 11:23:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
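Note: the write/verify passes above all go through nbd_dd_data_verify. A simplified sketch of what the xtrace suggests the helper does (the scratch-file path is shortened here; the real one lives under the spdk test tree):

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/tmp/nbdrandtest
    if [ "$operation" = write ]; then
        # fill the scratch file with 1M of random data, then copy it onto every nbd device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # read the first 1M back from each device and compare it byte for byte
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}

A run like the one logged above calls this twice per round: once with operation=write, once with operation=verify, and any cmp mismatch fails the test.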
00:04:39.564 11:23:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.564 11:23:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.822 11:23:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.822 11:23:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:39.822 11:23:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.080 Malloc0 00:04:40.080 11:23:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.338 Malloc1 00:04:40.596 11:23:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.596 11:23:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.853 /dev/nbd0 00:04:40.853 11:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.853 11:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:40.853 1+0 records in 00:04:40.853 1+0 records out 00:04:40.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258859 s, 15.8 MB/s 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.853 11:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.853 11:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.853 11:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.853 11:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.112 /dev/nbd1 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.112 1+0 records in 00:04:41.112 1+0 records out 00:04:41.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022538 s, 18.2 MB/s 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:41.112 11:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.112 11:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:41.370 { 00:04:41.370 "nbd_device": "/dev/nbd0", 00:04:41.370 "bdev_name": "Malloc0" 00:04:41.370 }, 00:04:41.370 { 00:04:41.370 "nbd_device": "/dev/nbd1", 00:04:41.370 "bdev_name": "Malloc1" 00:04:41.370 } 00:04:41.370 ]' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.370 { 00:04:41.370 "nbd_device": "/dev/nbd0", 00:04:41.370 "bdev_name": "Malloc0" 00:04:41.370 }, 00:04:41.370 { 00:04:41.370 "nbd_device": "/dev/nbd1", 00:04:41.370 "bdev_name": "Malloc1" 00:04:41.370 } 00:04:41.370 ]' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.370 /dev/nbd1' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.370 /dev/nbd1' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.370 256+0 records in 00:04:41.370 256+0 records out 00:04:41.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486648 s, 215 MB/s 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.370 256+0 records in 00:04:41.370 256+0 records out 00:04:41.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020151 s, 52.0 MB/s 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.370 256+0 records in 00:04:41.370 256+0 records out 00:04:41.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220422 s, 47.6 MB/s 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.370 11:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.628 11:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.886 11:23:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.144 11:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.401 11:23:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.401 11:23:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.659 11:23:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:42.916 [2024-11-15 11:23:23.216743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.916 [2024-11-15 11:23:23.271427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.916 [2024-11-15 11:23:23.271427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.916 [2024-11-15 11:23:23.332081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.916 [2024-11-15 11:23:23.332148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:46.194 11:23:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.194 11:23:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:46.194 spdk_app_start Round 2 00:04:46.194 11:23:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2805272 /var/tmp/spdk-nbd.sock 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2805272 ']' 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
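Note: the nbd_get_count calls above count exported devices by querying the target over its RPC socket and filtering the JSON reply. A hedged sketch of that query, using only the rpc.py and jq calls visible in the trace:

nbd_get_count() {
    local rpc_server=$1
    local disks_json disks_name
    # ask the running spdk app which nbd devices it currently exports
    disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    # pull the device paths out of the JSON and count the /dev/nbd entries
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    echo "$disks_name" | grep -c /dev/nbd || true
}

The "|| true" matters for the empty case: with no devices left, grep -c prints 0 but exits non-zero, which is exactly what the trace shows after the devices are stopped.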
00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.194 11:23:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:46.194 11:23:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.194 Malloc0 00:04:46.194 11:23:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.452 Malloc1 00:04:46.452 11:23:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.452 11:23:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.017 /dev/nbd0 00:04:47.017 11:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.017 11:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:47.017 1+0 records in 00:04:47.017 1+0 records out 00:04:47.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021074 s, 19.4 MB/s 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.017 11:23:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.017 11:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.018 11:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.018 11:23:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.275 /dev/nbd1 00:04:47.275 11:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.276 11:23:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.276 1+0 records in 00:04:47.276 1+0 records out 00:04:47.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190662 s, 21.5 MB/s 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:47.276 11:23:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:47.276 11:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.276 11:23:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.276 11:23:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.276 11:23:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.276 11:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:47.533 { 00:04:47.533 "nbd_device": "/dev/nbd0", 00:04:47.533 "bdev_name": "Malloc0" 00:04:47.533 }, 00:04:47.533 { 00:04:47.533 "nbd_device": "/dev/nbd1", 00:04:47.533 "bdev_name": "Malloc1" 00:04:47.533 } 00:04:47.533 ]' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.533 { 00:04:47.533 "nbd_device": "/dev/nbd0", 00:04:47.533 "bdev_name": "Malloc0" 00:04:47.533 }, 00:04:47.533 { 00:04:47.533 "nbd_device": "/dev/nbd1", 00:04:47.533 "bdev_name": "Malloc1" 00:04:47.533 } 00:04:47.533 ]' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.533 /dev/nbd1' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.533 /dev/nbd1' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.533 256+0 records in 00:04:47.533 256+0 records out 00:04:47.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512532 s, 205 MB/s 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.533 256+0 records in 00:04:47.533 256+0 records out 00:04:47.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206094 s, 50.9 MB/s 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.533 256+0 records in 00:04:47.533 256+0 records out 00:04:47.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217994 s, 48.1 MB/s 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.533 11:23:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.790 11:23:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.353 11:23:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.610 11:23:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.610 11:23:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.866 11:23:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.124 [2024-11-15 11:23:29.325985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.124 [2024-11-15 11:23:29.381800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.124 [2024-11-15 11:23:29.381804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.124 [2024-11-15 11:23:29.434859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.124 [2024-11-15 11:23:29.434919] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.397 11:23:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2805272 /var/tmp/spdk-nbd.sock 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2805272 ']' 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
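Note: the teardown above waits for each /dev/nbdX node to disappear via waitfornbd_exit. A minimal sketch of that polling loop, assuming the retry count seen in the trace and a guessed sleep interval:

waitfornbd_exit() {
    local nbd_name=$1
    # poll /proc/partitions until the kernel no longer lists the device
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1
    done
    return 0
}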
00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.397 11:23:32 event.app_repeat -- event/event.sh@39 -- # killprocess 2805272 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2805272 ']' 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2805272 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2805272 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2805272' 00:04:52.397 killing process with pid 2805272 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2805272 00:04:52.397 11:23:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2805272 00:04:52.397 spdk_app_start is called in Round 0. 00:04:52.397 Shutdown signal received, stop current app iteration 00:04:52.398 Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 reinitialization... 00:04:52.398 spdk_app_start is called in Round 1. 00:04:52.398 Shutdown signal received, stop current app iteration 00:04:52.398 Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 reinitialization... 00:04:52.398 spdk_app_start is called in Round 2. 00:04:52.398 Shutdown signal received, stop current app iteration 00:04:52.398 Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 reinitialization... 00:04:52.398 spdk_app_start is called in Round 3. 
00:04:52.398 Shutdown signal received, stop current app iteration 00:04:52.398 11:23:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:52.398 11:23:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:52.398 00:04:52.398 real 0m18.809s 00:04:52.398 user 0m41.587s 00:04:52.398 sys 0m3.227s 00:04:52.398 11:23:32 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.398 11:23:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.398 ************************************ 00:04:52.398 END TEST app_repeat 00:04:52.398 ************************************ 00:04:52.398 11:23:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:52.398 11:23:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:52.398 11:23:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.398 11:23:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.398 11:23:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.398 ************************************ 00:04:52.398 START TEST cpu_locks 00:04:52.398 ************************************ 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:52.398 * Looking for test storage... 00:04:52.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.398 11:23:32 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.398 --rc genhtml_branch_coverage=1 00:04:52.398 --rc genhtml_function_coverage=1 00:04:52.398 --rc genhtml_legend=1 00:04:52.398 --rc geninfo_all_blocks=1 00:04:52.398 --rc geninfo_unexecuted_blocks=1 00:04:52.398 00:04:52.398 ' 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.398 --rc genhtml_branch_coverage=1 00:04:52.398 --rc genhtml_function_coverage=1 00:04:52.398 --rc genhtml_legend=1 00:04:52.398 --rc geninfo_all_blocks=1 00:04:52.398 --rc geninfo_unexecuted_blocks=1 00:04:52.398 00:04:52.398 ' 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.398 --rc genhtml_branch_coverage=1 00:04:52.398 --rc genhtml_function_coverage=1 00:04:52.398 --rc genhtml_legend=1 00:04:52.398 --rc geninfo_all_blocks=1 00:04:52.398 --rc geninfo_unexecuted_blocks=1 00:04:52.398 00:04:52.398 ' 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.398 --rc genhtml_branch_coverage=1 00:04:52.398 --rc genhtml_function_coverage=1 00:04:52.398 --rc genhtml_legend=1 00:04:52.398 --rc geninfo_all_blocks=1 00:04:52.398 --rc geninfo_unexecuted_blocks=1 00:04:52.398 00:04:52.398 ' 00:04:52.398 11:23:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:52.398 11:23:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:52.398 11:23:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:52.398 11:23:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.398 11:23:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.655 ************************************ 
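The cpu_locks preamble above is scripts/common.sh deciding which coverage flags to export: it compares the installed lcov version against 2 field by field after splitting on '.', '-' and ':'. A minimal standalone sketch of that comparison idea follows; version_lt is a name chosen for this illustration (the script's own helpers are lt/cmp_versions) and purely numeric version fields are assumed.

  #!/usr/bin/env bash
  # Sketch of a dotted-version comparison in the spirit of the cmp_versions
  # trace above: split both versions on '.', '-' and ':' and compare the
  # fields as integers. Returns 0 (true) when $1 sorts strictly before $2.
  version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < len; i++ )); do
          local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  # lcov 1.15 predates 2.x, so the coverage flags seen in LCOV_OPTS above
  # would be selected on this runner.
  version_lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"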
00:04:52.655 START TEST default_locks 00:04:52.655 ************************************ 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2807755 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2807755 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2807755 ']' 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.655 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.656 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.656 11:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.656 [2024-11-15 11:23:32.902661] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:52.656 [2024-11-15 11:23:32.902753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2807755 ] 00:04:52.656 [2024-11-15 11:23:32.969474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.656 [2024-11-15 11:23:33.029987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.912 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.912 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:52.912 11:23:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2807755 00:04:52.912 11:23:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2807755 00:04:52.912 11:23:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.169 lslocks: write error 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2807755 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2807755 ']' 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2807755 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2807755 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2807755' 00:04:53.169 killing process with pid 2807755 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2807755 00:04:53.169 11:23:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2807755 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2807755 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2807755 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2807755 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2807755 ']' 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.735 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2807755) - No such process 00:04:53.736 ERROR: process (pid: 2807755) is no longer running 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:53.736 00:04:53.736 real 0m1.166s 00:04:53.736 user 0m1.134s 00:04:53.736 sys 0m0.494s 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.736 11:23:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.736 ************************************ 00:04:53.736 END TEST default_locks 00:04:53.736 ************************************ 00:04:53.736 11:23:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:53.736 11:23:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.736 11:23:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.736 11:23:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.736 ************************************ 00:04:53.736 START TEST default_locks_via_rpc 00:04:53.736 ************************************ 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2807925 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2807925 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2807925 ']' 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
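The default_locks trace above starts a single spdk_tgt on core mask 0x1, proves the core lock is held by running lslocks against the target pid and grepping for spdk_cpu_lock (the stray "lslocks: write error" lines are only lslocks complaining when grep -q closes the pipe early), then kills the target and verifies that waitforlisten no longer succeeds. A minimal sketch of that check, assuming a built spdk_tgt; has_core_lock, the hard-coded binary path and the sleep-based wait are illustration-only simplifications of the test helpers.

  #!/usr/bin/env bash
  # Sketch: verify that a running SPDK target holds its per-core lock,
  # mirroring the lslocks/grep pattern from the default_locks trace.
  SPDK_TGT=./build/bin/spdk_tgt        # assumed path; adjust to your tree

  has_core_lock() {
      local pid=$1
      # SPDK core locks show up in lslocks with names containing "spdk_cpu_lock".
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  "$SPDK_TGT" -m 0x1 &                 # start the target on core 0 only
  tgt_pid=$!
  sleep 2                              # crude stand-in for waitforlisten

  if has_core_lock "$tgt_pid"; then
      echo "pid $tgt_pid holds an spdk_cpu_lock entry"
  fi

  kill -9 "$tgt_pid"                   # the lock disappears with the process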
00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.736 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.736 [2024-11-15 11:23:34.120779] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:53.736 [2024-11-15 11:23:34.120873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2807925 ] 00:04:53.994 [2024-11-15 11:23:34.187567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.994 [2024-11-15 11:23:34.247396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2807925 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2807925 00:04:54.262 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2807925 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2807925 ']' 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2807925 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2807925 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.563 
11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2807925' 00:04:54.563 killing process with pid 2807925 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2807925 00:04:54.563 11:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2807925 00:04:54.830 00:04:54.830 real 0m1.169s 00:04:54.830 user 0m1.128s 00:04:54.830 sys 0m0.503s 00:04:54.830 11:23:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.830 11:23:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.830 ************************************ 00:04:54.830 END TEST default_locks_via_rpc 00:04:54.830 ************************************ 00:04:55.089 11:23:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:55.089 11:23:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.089 11:23:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.089 11:23:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.089 ************************************ 00:04:55.089 START TEST non_locking_app_on_locked_coremask 00:04:55.089 ************************************ 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2808087 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2808087 /var/tmp/spdk.sock 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2808087 ']' 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.089 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.089 [2024-11-15 11:23:35.343084] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
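The default_locks_via_rpc trace above runs the same verification but flips the behaviour at runtime: framework_disable_cpumask_locks releases the per-core locks of a live target and framework_enable_cpumask_locks re-claims them, after which lslocks shows the spdk_cpu_lock entry again. A hedged sketch of that flow using SPDK's scripts/rpc.py client; the method names are taken from the trace, but whether your rpc.py exposes them as subcommands is an assumption to verify.

  #!/usr/bin/env bash
  # Sketch: toggle CPU core locks on a live target over JSON-RPC.
  RPC=./scripts/rpc.py                 # assumed client path
  SOCK=/var/tmp/spdk.sock              # default RPC socket, as in the trace

  "$RPC" -s "$SOCK" framework_disable_cpumask_locks   # drop the per-core locks
  lslocks | grep spdk_cpu_lock || echo "no core locks held"

  "$RPC" -s "$SOCK" framework_enable_cpumask_locks    # re-claim them
  lslocks | grep spdk_cpu_lock                        # the lock entry is back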
00:04:55.089 [2024-11-15 11:23:35.343184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808087 ] 00:04:55.089 [2024-11-15 11:23:35.408503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.089 [2024-11-15 11:23:35.470801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2808210 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2808210 /var/tmp/spdk2.sock 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2808210 ']' 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.348 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.607 [2024-11-15 11:23:35.800147] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:55.607 [2024-11-15 11:23:35.800236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808210 ] 00:04:55.607 [2024-11-15 11:23:35.897306] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:55.607 [2024-11-15 11:23:35.897344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.607 [2024-11-15 11:23:36.015399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.540 11:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.541 11:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.541 11:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2808087 00:04:56.541 11:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2808087 00:04:56.541 11:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:57.106 lslocks: write error 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2808087 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2808087 ']' 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2808087 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808087 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808087' 00:04:57.106 killing process with pid 2808087 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2808087 00:04:57.106 11:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2808087 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2808210 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2808210 ']' 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2808210 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808210 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808210' 00:04:58.040 
killing process with pid 2808210 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2808210 00:04:58.040 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2808210 00:04:58.299 00:04:58.300 real 0m3.314s 00:04:58.300 user 0m3.550s 00:04:58.300 sys 0m1.063s 00:04:58.300 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.300 11:23:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.300 ************************************ 00:04:58.300 END TEST non_locking_app_on_locked_coremask 00:04:58.300 ************************************ 00:04:58.300 11:23:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:58.300 11:23:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.300 11:23:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.300 11:23:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.300 ************************************ 00:04:58.300 START TEST locking_app_on_unlocked_coremask 00:04:58.300 ************************************ 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2808521 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2808521 /var/tmp/spdk.sock 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2808521 ']' 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.300 11:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.300 [2024-11-15 11:23:38.707857] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:04:58.300 [2024-11-15 11:23:38.707963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808521 ] 00:04:58.558 [2024-11-15 11:23:38.772049] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
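The non_locking_app_on_locked_coremask trace above is the reason --disable-cpumask-locks exists: the first target claims core 0, and a second instance on the same mask still starts (logging "CPU core locks deactivated.") because it is told not to claim the locks and listens on a separate RPC socket. A minimal sketch of that two-instance startup; the binary path and sleep-based waits are illustration-only simplifications.

  #!/usr/bin/env bash
  # Sketch: two SPDK targets on the same core mask; only the first takes
  # the core lock, the second opts out with --disable-cpumask-locks.
  SPDK_TGT=./build/bin/spdk_tgt        # assumed path; adjust to your tree

  "$SPDK_TGT" -m 0x1 &                                   # claims core 0
  pid1=$!
  sleep 2

  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                                # same core, no lock attempt
  sleep 2

  lslocks -p "$pid1" | grep spdk_cpu_lock                # first instance holds the lock
  lslocks -p "$pid2" | grep spdk_cpu_lock || echo "second instance holds no core lock"

  kill -9 "$pid2" "$pid1"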
00:04:58.558 [2024-11-15 11:23:38.772086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.558 [2024-11-15 11:23:38.827022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2808652 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2808652 /var/tmp/spdk2.sock 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2808652 ']' 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:58.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.816 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.816 [2024-11-15 11:23:39.156036] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:04:58.816 [2024-11-15 11:23:39.156126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808652 ] 00:04:59.074 [2024-11-15 11:23:39.255415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.074 [2024-11-15 11:23:39.367997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.018 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.018 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:00.018 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2808652 00:05:00.018 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2808652 00:05:00.018 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.276 lslocks: write error 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2808521 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2808521 ']' 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2808521 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808521 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808521' 00:05:00.276 killing process with pid 2808521 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2808521 00:05:00.276 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2808521 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2808652 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2808652 ']' 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2808652 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808652 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.210 11:23:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808652' 00:05:01.210 killing process with pid 2808652 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2808652 00:05:01.210 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2808652 00:05:01.470 00:05:01.470 real 0m3.184s 00:05:01.470 user 0m3.418s 00:05:01.470 sys 0m1.027s 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.470 ************************************ 00:05:01.470 END TEST locking_app_on_unlocked_coremask 00:05:01.470 ************************************ 00:05:01.470 11:23:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:01.470 11:23:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.470 11:23:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.470 11:23:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.470 ************************************ 00:05:01.470 START TEST locking_app_on_locked_coremask 00:05:01.470 ************************************ 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2808957 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2808957 /var/tmp/spdk.sock 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2808957 ']' 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.470 11:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.729 [2024-11-15 11:23:41.943675] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:05:01.729 [2024-11-15 11:23:41.943776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808957 ] 00:05:01.729 [2024-11-15 11:23:42.007480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.729 [2024-11-15 11:23:42.062167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2809086 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2809086 /var/tmp/spdk2.sock 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2809086 /var/tmp/spdk2.sock 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2809086 /var/tmp/spdk2.sock 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2809086 ']' 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:01.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.987 11:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.987 [2024-11-15 11:23:42.390817] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:05:01.987 [2024-11-15 11:23:42.390907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809086 ] 00:05:02.245 [2024-11-15 11:23:42.489183] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2808957 has claimed it. 00:05:02.245 [2024-11-15 11:23:42.489243] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:02.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2809086) - No such process 00:05:02.811 ERROR: process (pid: 2809086) is no longer running 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2808957 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2808957 00:05:02.811 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.070 lslocks: write error 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2808957 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2808957 ']' 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2808957 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2808957 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2808957' 00:05:03.070 killing process with pid 2808957 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2808957 00:05:03.070 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2808957 00:05:03.635 00:05:03.635 real 0m2.005s 00:05:03.635 user 0m2.220s 00:05:03.635 sys 0m0.628s 00:05:03.635 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
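The locking_app_on_locked_coremask trace above exercises the failure path: with locking left enabled on both instances, the second spdk_tgt on mask 0x1 aborts in claim_cpu_cores with "Cannot create lock on core 0, probably process ... has claimed it". The lock is an exclusive lock on a per-core file (the /var/tmp/spdk_cpu_lock_000..002 names appear in the later check_remaining_locks step), but the exact primitive app.c uses is not visible in this log, so the flock-based sketch below only approximates the idea.

  #!/usr/bin/env bash
  # Sketch: per-core advisory locking in the spirit of SPDK's core locks.
  # flock(1) is used for illustration; SPDK's claim_cpu_cores() in app.c
  # may rely on a different locking primitive.
  claim_core() {
      local core=$1
      local lockfile
      lockfile=/var/tmp/spdk_cpu_lock_$(printf '%03d' "$core")
      exec {fd}>"$lockfile"            # keep the fd open for the process lifetime
      if ! flock -n "$fd"; then
          echo "Cannot create lock on core $core, another process has claimed it" >&2
          return 1
      fi
      echo "claimed core $core via $lockfile"
  }

  claim_core 0 || exit 1               # a second process doing this would fail here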
00:05:03.635 11:23:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.635 ************************************ 00:05:03.635 END TEST locking_app_on_locked_coremask 00:05:03.635 ************************************ 00:05:03.635 11:23:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:03.635 11:23:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.635 11:23:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.635 11:23:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.635 ************************************ 00:05:03.635 START TEST locking_overlapped_coremask 00:05:03.635 ************************************ 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2809255 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2809255 /var/tmp/spdk.sock 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2809255 ']' 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.635 11:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.635 [2024-11-15 11:23:43.996829] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:05:03.635 [2024-11-15 11:23:43.996932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809255 ] 00:05:03.893 [2024-11-15 11:23:44.061982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.893 [2024-11-15 11:23:44.121253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.893 [2024-11-15 11:23:44.121316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.893 [2024-11-15 11:23:44.121324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2809282 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2809282 /var/tmp/spdk2.sock 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2809282 /var/tmp/spdk2.sock 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2809282 /var/tmp/spdk2.sock 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2809282 ']' 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.152 11:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.152 [2024-11-15 11:23:44.455886] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:05:04.152 [2024-11-15 11:23:44.455976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809282 ] 00:05:04.152 [2024-11-15 11:23:44.561749] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2809255 has claimed it. 00:05:04.152 [2024-11-15 11:23:44.561821] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:05.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2809282) - No such process 00:05:05.092 ERROR: process (pid: 2809282) is no longer running 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2809255 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2809255 ']' 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2809255 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809255 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809255' 00:05:05.092 killing process with pid 2809255 00:05:05.092 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2809255 00:05:05.092 11:23:45 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2809255 00:05:05.350 00:05:05.350 real 0m1.708s 00:05:05.350 user 0m4.769s 00:05:05.350 sys 0m0.478s 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.350 ************************************ 00:05:05.350 END TEST locking_overlapped_coremask 00:05:05.350 ************************************ 00:05:05.350 11:23:45 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:05.350 11:23:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.350 11:23:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.350 11:23:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.350 ************************************ 00:05:05.350 START TEST locking_overlapped_coremask_via_rpc 00:05:05.350 ************************************ 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2809547 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2809547 /var/tmp/spdk.sock 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2809547 ']' 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.350 11:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.350 [2024-11-15 11:23:45.757264] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:05:05.350 [2024-11-15 11:23:45.757378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809547 ] 00:05:05.609 [2024-11-15 11:23:45.828314] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
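The locking_overlapped_coremask trace above starts the first target on mask 0x7 (cores 0-2), rejects a second target on the overlapping mask 0x1c because core 2 is already claimed, and then check_remaining_locks confirms that exactly /var/tmp/spdk_cpu_lock_000 through _002 still exist. A short sketch of that remaining-locks assertion, following the glob-and-compare pattern visible in the trace.

  #!/usr/bin/env bash
  # Sketch: assert that only the lock files for cores 0-2 remain, mirroring
  # the check_remaining_locks step in the trace above.
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local expected=(/var/tmp/spdk_cpu_lock_{000..002})
      if [[ "${locks[*]}" == "${expected[*]}" ]]; then
          echo "core locks 000-002 present, nothing else"
      else
          echo "unexpected lock files: ${locks[*]}" >&2
          return 1
      fi
  }

  check_remaining_locks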
00:05:05.609 [2024-11-15 11:23:45.828353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:05.609 [2024-11-15 11:23:45.894339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.609 [2024-11-15 11:23:45.894364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.609 [2024-11-15 11:23:45.894368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2809566 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2809566 /var/tmp/spdk2.sock 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2809566 ']' 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.866 11:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.866 [2024-11-15 11:23:46.231276] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:05:05.866 [2024-11-15 11:23:46.231390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809566 ] 00:05:06.124 [2024-11-15 11:23:46.334814] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:06.124 [2024-11-15 11:23:46.334845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.124 [2024-11-15 11:23:46.456893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.124 [2024-11-15 11:23:46.460357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:06.124 [2024-11-15 11:23:46.460361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.056 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.056 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.056 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.057 [2024-11-15 11:23:47.224422] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2809547 has claimed it. 
00:05:07.057 request: 00:05:07.057 { 00:05:07.057 "method": "framework_enable_cpumask_locks", 00:05:07.057 "req_id": 1 00:05:07.057 } 00:05:07.057 Got JSON-RPC error response 00:05:07.057 response: 00:05:07.057 { 00:05:07.057 "code": -32603, 00:05:07.057 "message": "Failed to claim CPU core: 2" 00:05:07.057 } 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2809547 /var/tmp/spdk.sock 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2809547 ']' 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.057 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2809566 /var/tmp/spdk2.sock 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2809566 ']' 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
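The request/response pair above is the heart of the via_rpc variant: both targets are started with --disable-cpumask-locks, the first one is then asked to take its locks over JSON-RPC, and the same call against the second target fails with -32603 because core 2 is already held. Run by hand the exchange would look roughly like this (socket paths, masks and the RPC name are from the log; the invocation itself is a sketch):

# both targets start fine despite overlapping on core 2, since locking is deferred
./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &

# first target claims its cores on demand
./scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks

# second target overlaps on core 2, so the same RPC returns -32603 "Failed to claim CPU core: 2"
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks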
00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.315 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:07.572 00:05:07.572 real 0m2.066s 00:05:07.572 user 0m1.120s 00:05:07.572 sys 0m0.178s 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.572 11:23:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.572 ************************************ 00:05:07.572 END TEST locking_overlapped_coremask_via_rpc 00:05:07.572 ************************************ 00:05:07.572 11:23:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:07.572 11:23:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2809547 ]] 00:05:07.572 11:23:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2809547 00:05:07.572 11:23:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2809547 ']' 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2809547 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809547 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809547' 00:05:07.573 killing process with pid 2809547 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2809547 00:05:07.573 11:23:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2809547 00:05:08.138 11:23:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2809566 ]] 00:05:08.138 11:23:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2809566 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2809566 ']' 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2809566 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809566 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809566' 00:05:08.138 killing process with pid 2809566 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2809566 00:05:08.138 11:23:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2809566 00:05:08.396 11:23:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.396 11:23:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:08.396 11:23:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2809547 ]] 00:05:08.396 11:23:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2809547 00:05:08.396 11:23:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2809547 ']' 00:05:08.396 11:23:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2809547 00:05:08.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2809547) - No such process 00:05:08.397 11:23:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2809547 is not found' 00:05:08.397 Process with pid 2809547 is not found 00:05:08.397 11:23:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2809566 ]] 00:05:08.397 11:23:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2809566 00:05:08.397 11:23:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2809566 ']' 00:05:08.397 11:23:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2809566 00:05:08.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2809566) - No such process 00:05:08.397 11:23:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2809566 is not found' 00:05:08.397 Process with pid 2809566 is not found 00:05:08.397 11:23:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:08.397 00:05:08.397 real 0m16.067s 00:05:08.397 user 0m29.024s 00:05:08.397 sys 0m5.345s 00:05:08.397 11:23:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.397 11:23:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.397 ************************************ 00:05:08.397 END TEST cpu_locks 00:05:08.397 ************************************ 00:05:08.397 00:05:08.397 real 0m40.776s 00:05:08.397 user 1m19.716s 00:05:08.397 sys 0m9.362s 00:05:08.397 11:23:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.397 11:23:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.397 ************************************ 00:05:08.397 END TEST event 00:05:08.397 ************************************ 00:05:08.397 11:23:48 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:08.397 11:23:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.397 11:23:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.397 11:23:48 -- common/autotest_common.sh@10 -- # set +x 00:05:08.397 ************************************ 00:05:08.397 START TEST thread 00:05:08.397 ************************************ 00:05:08.397 11:23:48 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:08.656 * Looking for test storage... 00:05:08.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.656 11:23:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.656 11:23:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.656 11:23:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.656 11:23:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.656 11:23:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.656 11:23:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.656 11:23:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.656 11:23:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.656 11:23:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.656 11:23:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.656 11:23:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.656 11:23:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:08.656 11:23:48 thread -- scripts/common.sh@345 -- # : 1 00:05:08.656 11:23:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.656 11:23:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.656 11:23:48 thread -- scripts/common.sh@365 -- # decimal 1 00:05:08.656 11:23:48 thread -- scripts/common.sh@353 -- # local d=1 00:05:08.656 11:23:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.656 11:23:48 thread -- scripts/common.sh@355 -- # echo 1 00:05:08.656 11:23:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.656 11:23:48 thread -- scripts/common.sh@366 -- # decimal 2 00:05:08.656 11:23:48 thread -- scripts/common.sh@353 -- # local d=2 00:05:08.656 11:23:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.656 11:23:48 thread -- scripts/common.sh@355 -- # echo 2 00:05:08.656 11:23:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.656 11:23:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.656 11:23:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.656 11:23:48 thread -- scripts/common.sh@368 -- # return 0 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.656 --rc genhtml_branch_coverage=1 00:05:08.656 --rc genhtml_function_coverage=1 00:05:08.656 --rc genhtml_legend=1 00:05:08.656 --rc geninfo_all_blocks=1 00:05:08.656 --rc geninfo_unexecuted_blocks=1 00:05:08.656 00:05:08.656 ' 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.656 --rc genhtml_branch_coverage=1 00:05:08.656 --rc genhtml_function_coverage=1 00:05:08.656 --rc genhtml_legend=1 00:05:08.656 --rc geninfo_all_blocks=1 00:05:08.656 --rc geninfo_unexecuted_blocks=1 00:05:08.656 
00:05:08.656 ' 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.656 --rc genhtml_branch_coverage=1 00:05:08.656 --rc genhtml_function_coverage=1 00:05:08.656 --rc genhtml_legend=1 00:05:08.656 --rc geninfo_all_blocks=1 00:05:08.656 --rc geninfo_unexecuted_blocks=1 00:05:08.656 00:05:08.656 ' 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.656 --rc genhtml_branch_coverage=1 00:05:08.656 --rc genhtml_function_coverage=1 00:05:08.656 --rc genhtml_legend=1 00:05:08.656 --rc geninfo_all_blocks=1 00:05:08.656 --rc geninfo_unexecuted_blocks=1 00:05:08.656 00:05:08.656 ' 00:05:08.656 11:23:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.656 11:23:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.656 ************************************ 00:05:08.656 START TEST thread_poller_perf 00:05:08.656 ************************************ 00:05:08.656 11:23:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:08.656 [2024-11-15 11:23:49.004493] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:05:08.656 [2024-11-15 11:23:49.004554] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810054 ] 00:05:08.656 [2024-11-15 11:23:49.072254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.914 [2024-11-15 11:23:49.132278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.914 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:09.847 [2024-11-15T10:23:50.274Z] ====================================== 00:05:09.847 [2024-11-15T10:23:50.274Z] busy:2708580660 (cyc) 00:05:09.847 [2024-11-15T10:23:50.274Z] total_run_count: 363000 00:05:09.847 [2024-11-15T10:23:50.274Z] tsc_hz: 2700000000 (cyc) 00:05:09.847 [2024-11-15T10:23:50.274Z] ====================================== 00:05:09.847 [2024-11-15T10:23:50.274Z] poller_cost: 7461 (cyc), 2763 (nsec) 00:05:09.847 00:05:09.847 real 0m1.214s 00:05:09.847 user 0m1.143s 00:05:09.847 sys 0m0.065s 00:05:09.847 11:23:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.847 11:23:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.847 ************************************ 00:05:09.847 END TEST thread_poller_perf 00:05:09.847 ************************************ 00:05:09.847 11:23:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:09.847 11:23:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:09.847 11:23:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.847 11:23:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.847 ************************************ 00:05:09.847 START TEST thread_poller_perf 00:05:09.847 ************************************ 00:05:09.847 11:23:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:09.847 [2024-11-15 11:23:50.269975] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:05:09.847 [2024-11-15 11:23:50.270035] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810213 ] 00:05:10.105 [2024-11-15 11:23:50.336573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.105 [2024-11-15 11:23:50.400481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.105 Running 1000 pollers for 1 seconds with 0 microseconds period. 
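The summary block above is all poller_perf reports; poller_cost is just the busy TSC cycle count divided by the number of poller executions, converted to nanoseconds with the advertised TSC frequency. Re-deriving the figures from this run:

# poller_cost (cyc)  = busy / total_run_count
echo '2708580660 / 363000' | bc                # 7461 cyc, as printed above
# poller_cost (nsec) = cyc * 1e9 / tsc_hz
echo '7461 * 1000000000 / 2700000000' | bc     # 2763 nsec
# the 0-microsecond run that follows is read the same way (563 cyc, 208 nsec)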
00:05:11.478 [2024-11-15T10:23:51.905Z] ====================================== 00:05:11.478 [2024-11-15T10:23:51.905Z] busy:2702694786 (cyc) 00:05:11.478 [2024-11-15T10:23:51.905Z] total_run_count: 4795000 00:05:11.478 [2024-11-15T10:23:51.905Z] tsc_hz: 2700000000 (cyc) 00:05:11.478 [2024-11-15T10:23:51.905Z] ====================================== 00:05:11.478 [2024-11-15T10:23:51.905Z] poller_cost: 563 (cyc), 208 (nsec) 00:05:11.478 00:05:11.478 real 0m1.209s 00:05:11.478 user 0m1.139s 00:05:11.478 sys 0m0.066s 00:05:11.478 11:23:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.478 11:23:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.478 ************************************ 00:05:11.478 END TEST thread_poller_perf 00:05:11.478 ************************************ 00:05:11.478 11:23:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:11.478 00:05:11.478 real 0m2.673s 00:05:11.478 user 0m2.438s 00:05:11.478 sys 0m0.239s 00:05:11.478 11:23:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.478 11:23:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.478 ************************************ 00:05:11.478 END TEST thread 00:05:11.478 ************************************ 00:05:11.478 11:23:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:11.478 11:23:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:11.478 11:23:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.478 11:23:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.478 11:23:51 -- common/autotest_common.sh@10 -- # set +x 00:05:11.478 ************************************ 00:05:11.478 START TEST app_cmdline 00:05:11.478 ************************************ 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:11.478 * Looking for test storage... 
00:05:11.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.478 11:23:51 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.478 --rc genhtml_branch_coverage=1 00:05:11.478 --rc genhtml_function_coverage=1 00:05:11.478 --rc genhtml_legend=1 00:05:11.478 --rc geninfo_all_blocks=1 00:05:11.478 --rc geninfo_unexecuted_blocks=1 00:05:11.478 00:05:11.478 ' 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.478 --rc genhtml_branch_coverage=1 00:05:11.478 --rc genhtml_function_coverage=1 00:05:11.478 --rc genhtml_legend=1 00:05:11.478 --rc geninfo_all_blocks=1 00:05:11.478 --rc geninfo_unexecuted_blocks=1 
00:05:11.478 00:05:11.478 ' 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.478 --rc genhtml_branch_coverage=1 00:05:11.478 --rc genhtml_function_coverage=1 00:05:11.478 --rc genhtml_legend=1 00:05:11.478 --rc geninfo_all_blocks=1 00:05:11.478 --rc geninfo_unexecuted_blocks=1 00:05:11.478 00:05:11.478 ' 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.478 --rc genhtml_branch_coverage=1 00:05:11.478 --rc genhtml_function_coverage=1 00:05:11.478 --rc genhtml_legend=1 00:05:11.478 --rc geninfo_all_blocks=1 00:05:11.478 --rc geninfo_unexecuted_blocks=1 00:05:11.478 00:05:11.478 ' 00:05:11.478 11:23:51 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:11.478 11:23:51 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2810423 00:05:11.478 11:23:51 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:11.478 11:23:51 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2810423 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2810423 ']' 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.478 11:23:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.478 [2024-11-15 11:23:51.734002] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:05:11.478 [2024-11-15 11:23:51.734102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810423 ] 00:05:11.478 [2024-11-15 11:23:51.803361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.478 [2024-11-15 11:23:51.863935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.736 11:23:52 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.736 11:23:52 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:11.736 11:23:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:11.994 { 00:05:11.994 "version": "SPDK v25.01-pre git sha1 8531656d3", 00:05:11.994 "fields": { 00:05:11.994 "major": 25, 00:05:11.994 "minor": 1, 00:05:11.994 "patch": 0, 00:05:11.994 "suffix": "-pre", 00:05:11.994 "commit": "8531656d3" 00:05:11.994 } 00:05:11.994 } 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:11.994 11:23:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.994 11:23:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:11.994 11:23:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:11.994 11:23:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.253 11:23:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:12.253 11:23:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:12.253 11:23:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:12.253 11:23:52 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:12.511 request: 00:05:12.511 { 00:05:12.511 "method": "env_dpdk_get_mem_stats", 00:05:12.511 "req_id": 1 00:05:12.511 } 00:05:12.511 Got JSON-RPC error response 00:05:12.511 response: 00:05:12.511 { 00:05:12.511 "code": -32601, 00:05:12.511 "message": "Method not found" 00:05:12.511 } 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.511 11:23:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2810423 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2810423 ']' 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2810423 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2810423 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2810423' 00:05:12.511 killing process with pid 2810423 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 2810423 00:05:12.511 11:23:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 2810423 00:05:12.769 00:05:12.769 real 0m1.640s 00:05:12.769 user 0m2.010s 00:05:12.769 sys 0m0.497s 00:05:12.769 11:23:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.770 11:23:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:12.770 ************************************ 00:05:12.770 END TEST app_cmdline 00:05:12.770 ************************************ 00:05:13.029 11:23:53 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:13.029 11:23:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.029 11:23:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.029 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.029 ************************************ 00:05:13.029 START TEST version 00:05:13.029 ************************************ 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:13.029 * Looking for test storage... 
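The -32601 error a few entries back is the point of the cmdline test: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so anything outside that allow-list is rejected as if it did not exist. The behaviour can be checked directly (flag, method names and rpc.py path are from the log; the standalone calls are a sketch):

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

./scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats    # filtered: fails with -32601 "Method not found"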
00:05:13.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.029 11:23:53 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.029 11:23:53 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.029 11:23:53 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.029 11:23:53 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.029 11:23:53 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.029 11:23:53 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.029 11:23:53 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.029 11:23:53 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.029 11:23:53 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.029 11:23:53 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.029 11:23:53 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.029 11:23:53 version -- scripts/common.sh@344 -- # case "$op" in 00:05:13.029 11:23:53 version -- scripts/common.sh@345 -- # : 1 00:05:13.029 11:23:53 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.029 11:23:53 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.029 11:23:53 version -- scripts/common.sh@365 -- # decimal 1 00:05:13.029 11:23:53 version -- scripts/common.sh@353 -- # local d=1 00:05:13.029 11:23:53 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.029 11:23:53 version -- scripts/common.sh@355 -- # echo 1 00:05:13.029 11:23:53 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.029 11:23:53 version -- scripts/common.sh@366 -- # decimal 2 00:05:13.029 11:23:53 version -- scripts/common.sh@353 -- # local d=2 00:05:13.029 11:23:53 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.029 11:23:53 version -- scripts/common.sh@355 -- # echo 2 00:05:13.029 11:23:53 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.029 11:23:53 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.029 11:23:53 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.029 11:23:53 version -- scripts/common.sh@368 -- # return 0 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.029 --rc genhtml_branch_coverage=1 00:05:13.029 --rc genhtml_function_coverage=1 00:05:13.029 --rc genhtml_legend=1 00:05:13.029 --rc geninfo_all_blocks=1 00:05:13.029 --rc geninfo_unexecuted_blocks=1 00:05:13.029 00:05:13.029 ' 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.029 --rc genhtml_branch_coverage=1 00:05:13.029 --rc genhtml_function_coverage=1 00:05:13.029 --rc genhtml_legend=1 00:05:13.029 --rc geninfo_all_blocks=1 00:05:13.029 --rc geninfo_unexecuted_blocks=1 00:05:13.029 00:05:13.029 ' 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.029 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.029 --rc genhtml_branch_coverage=1 00:05:13.029 --rc genhtml_function_coverage=1 00:05:13.029 --rc genhtml_legend=1 00:05:13.029 --rc geninfo_all_blocks=1 00:05:13.029 --rc geninfo_unexecuted_blocks=1 00:05:13.029 00:05:13.029 ' 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.029 --rc genhtml_branch_coverage=1 00:05:13.029 --rc genhtml_function_coverage=1 00:05:13.029 --rc genhtml_legend=1 00:05:13.029 --rc geninfo_all_blocks=1 00:05:13.029 --rc geninfo_unexecuted_blocks=1 00:05:13.029 00:05:13.029 ' 00:05:13.029 11:23:53 version -- app/version.sh@17 -- # get_header_version major 00:05:13.029 11:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # cut -f2 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:13.029 11:23:53 version -- app/version.sh@17 -- # major=25 00:05:13.029 11:23:53 version -- app/version.sh@18 -- # get_header_version minor 00:05:13.029 11:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # cut -f2 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:13.029 11:23:53 version -- app/version.sh@18 -- # minor=1 00:05:13.029 11:23:53 version -- app/version.sh@19 -- # get_header_version patch 00:05:13.029 11:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # cut -f2 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:13.029 11:23:53 version -- app/version.sh@19 -- # patch=0 00:05:13.029 11:23:53 version -- app/version.sh@20 -- # get_header_version suffix 00:05:13.029 11:23:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # cut -f2 00:05:13.029 11:23:53 version -- app/version.sh@14 -- # tr -d '"' 00:05:13.029 11:23:53 version -- app/version.sh@20 -- # suffix=-pre 00:05:13.029 11:23:53 version -- app/version.sh@22 -- # version=25.1 00:05:13.029 11:23:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:13.029 11:23:53 version -- app/version.sh@28 -- # version=25.1rc0 00:05:13.029 11:23:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:13.029 11:23:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:13.029 11:23:53 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:13.029 11:23:53 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:13.029 00:05:13.029 real 0m0.200s 00:05:13.029 user 0m0.136s 00:05:13.029 sys 0m0.090s 00:05:13.029 11:23:53 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.029 
11:23:53 version -- common/autotest_common.sh@10 -- # set +x 00:05:13.029 ************************************ 00:05:13.029 END TEST version 00:05:13.029 ************************************ 00:05:13.029 11:23:53 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:13.029 11:23:53 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:13.029 11:23:53 -- spdk/autotest.sh@194 -- # uname -s 00:05:13.029 11:23:53 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:13.029 11:23:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:13.029 11:23:53 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:13.029 11:23:53 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:13.029 11:23:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:13.029 11:23:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:13.029 11:23:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.029 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.288 11:23:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:13.288 11:23:53 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:13.288 11:23:53 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:13.288 11:23:53 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:13.288 11:23:53 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:13.288 11:23:53 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:13.288 11:23:53 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:13.288 11:23:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:13.288 11:23:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.288 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.288 ************************************ 00:05:13.288 START TEST nvmf_tcp 00:05:13.288 ************************************ 00:05:13.288 11:23:53 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:13.288 * Looking for test storage... 
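version.sh builds its version string by grepping include/spdk/version.h, exactly as the trace above shows: MAJOR, MINOR, PATCH and SUFFIX are pulled out with grep/cut/tr and stitched together, then compared against what python3 reports for spdk.__version__. The extraction, done by hand (header path from the log; the rc0 handling is simplified here and is an assumption, not the script's exact branch):

H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$H" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$H" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
ver="${major}.${minor}"; [ "$suffix" = "-pre" ] && ver="${ver}rc0"
echo "$ver"                                # 25.1rc0 on this tree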
00:05:13.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:13.288 11:23:53 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.288 11:23:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.288 11:23:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.288 11:23:53 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.288 11:23:53 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.289 11:23:53 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.289 --rc genhtml_branch_coverage=1 00:05:13.289 --rc genhtml_function_coverage=1 00:05:13.289 --rc genhtml_legend=1 00:05:13.289 --rc geninfo_all_blocks=1 00:05:13.289 --rc geninfo_unexecuted_blocks=1 00:05:13.289 00:05:13.289 ' 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.289 --rc genhtml_branch_coverage=1 00:05:13.289 --rc genhtml_function_coverage=1 00:05:13.289 --rc genhtml_legend=1 00:05:13.289 --rc geninfo_all_blocks=1 00:05:13.289 --rc geninfo_unexecuted_blocks=1 00:05:13.289 00:05:13.289 ' 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:13.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.289 --rc genhtml_branch_coverage=1 00:05:13.289 --rc genhtml_function_coverage=1 00:05:13.289 --rc genhtml_legend=1 00:05:13.289 --rc geninfo_all_blocks=1 00:05:13.289 --rc geninfo_unexecuted_blocks=1 00:05:13.289 00:05:13.289 ' 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.289 --rc genhtml_branch_coverage=1 00:05:13.289 --rc genhtml_function_coverage=1 00:05:13.289 --rc genhtml_legend=1 00:05:13.289 --rc geninfo_all_blocks=1 00:05:13.289 --rc geninfo_unexecuted_blocks=1 00:05:13.289 00:05:13.289 ' 00:05:13.289 11:23:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:13.289 11:23:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:13.289 11:23:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.289 11:23:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.289 ************************************ 00:05:13.289 START TEST nvmf_target_core 00:05:13.289 ************************************ 00:05:13.289 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:13.289 * Looking for test storage... 00:05:13.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:13.289 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.289 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.289 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.550 --rc genhtml_branch_coverage=1 00:05:13.550 --rc genhtml_function_coverage=1 00:05:13.550 --rc genhtml_legend=1 00:05:13.550 --rc geninfo_all_blocks=1 00:05:13.550 --rc geninfo_unexecuted_blocks=1 00:05:13.550 00:05:13.550 ' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.550 --rc genhtml_branch_coverage=1 00:05:13.550 --rc genhtml_function_coverage=1 00:05:13.550 --rc genhtml_legend=1 00:05:13.550 --rc geninfo_all_blocks=1 00:05:13.550 --rc geninfo_unexecuted_blocks=1 00:05:13.550 00:05:13.550 ' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.550 --rc genhtml_branch_coverage=1 00:05:13.550 --rc genhtml_function_coverage=1 00:05:13.550 --rc genhtml_legend=1 00:05:13.550 --rc geninfo_all_blocks=1 00:05:13.550 --rc geninfo_unexecuted_blocks=1 00:05:13.550 00:05:13.550 ' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.550 --rc genhtml_branch_coverage=1 00:05:13.550 --rc genhtml_function_coverage=1 00:05:13.550 --rc genhtml_legend=1 00:05:13.550 --rc geninfo_all_blocks=1 00:05:13.550 --rc geninfo_unexecuted_blocks=1 00:05:13.550 00:05:13.550 ' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.550 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:13.551 
************************************ 00:05:13.551 START TEST nvmf_abort 00:05:13.551 ************************************ 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:13.551 * Looking for test storage... 00:05:13.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.551 --rc genhtml_branch_coverage=1 00:05:13.551 --rc genhtml_function_coverage=1 00:05:13.551 --rc genhtml_legend=1 00:05:13.551 --rc geninfo_all_blocks=1 00:05:13.551 --rc geninfo_unexecuted_blocks=1 00:05:13.551 00:05:13.551 ' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.551 --rc genhtml_branch_coverage=1 00:05:13.551 --rc genhtml_function_coverage=1 00:05:13.551 --rc genhtml_legend=1 00:05:13.551 --rc geninfo_all_blocks=1 00:05:13.551 --rc geninfo_unexecuted_blocks=1 00:05:13.551 00:05:13.551 ' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.551 --rc genhtml_branch_coverage=1 00:05:13.551 --rc genhtml_function_coverage=1 00:05:13.551 --rc genhtml_legend=1 00:05:13.551 --rc geninfo_all_blocks=1 00:05:13.551 --rc geninfo_unexecuted_blocks=1 00:05:13.551 00:05:13.551 ' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.551 --rc genhtml_branch_coverage=1 00:05:13.551 --rc genhtml_function_coverage=1 00:05:13.551 --rc genhtml_legend=1 00:05:13.551 --rc geninfo_all_blocks=1 00:05:13.551 --rc geninfo_unexecuted_blocks=1 00:05:13.551 00:05:13.551 ' 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.551 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.810 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:13.811 11:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:13.811 11:23:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:16.343 11:23:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:16.343 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:16.344 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:16.344 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:16.344 11:23:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:16.344 Found net devices under 0000:09:00.0: cvl_0_0 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:16.344 Found net devices under 0000:09:00.1: cvl_0_1 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:16.344 11:23:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:16.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:16.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:05:16.344 00:05:16.344 --- 10.0.0.2 ping statistics --- 00:05:16.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.344 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:16.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:16.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:05:16.344 00:05:16.344 --- 10.0.0.1 ping statistics --- 00:05:16.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:16.344 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2812510 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2812510 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2812510 ']' 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.344 [2024-11-15 11:23:56.372679] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
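Reference note: the nvmf_tcp_init plumbing traced a few entries above (interface flush, namespace creation, addressing, the port-4420 firewall rule, and the two connectivity pings) reduces to roughly the sequence below. This is a minimal sketch, not a verbatim excerpt of nvmf/common.sh; it assumes the same E810 netdev names (cvl_0_0 / cvl_0_1) and the 10.0.0.0/24 addressing used by this run.

  # Move the target-side NIC into its own namespace; the initiator side stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # NVMF_TARGET_NAMESPACE
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns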
00:05:16.344 [2024-11-15 11:23:56.372758] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:16.344 [2024-11-15 11:23:56.441169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.344 [2024-11-15 11:23:56.497547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:16.344 [2024-11-15 11:23:56.497613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:16.344 [2024-11-15 11:23:56.497637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.344 [2024-11-15 11:23:56.497648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.344 [2024-11-15 11:23:56.497657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:16.344 [2024-11-15 11:23:56.499088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.344 [2024-11-15 11:23:56.499157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.344 [2024-11-15 11:23:56.499154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:16.344 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 [2024-11-15 11:23:56.643799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 Malloc0 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 Delay0 
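Reference note: the target configuration that abort.sh drives through rpc_cmd above and in the entries that follow (TCP transport, the Malloc0/Delay0 bdev stack, subsystem cnode0 with its namespace and listeners) can be replayed by hand against the running nvmf_tgt. A minimal sketch, assuming the in-tree scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket; the option values are copied from the trace.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256                 # abort.sh@17
  $RPC bdev_malloc_create 64 4096 -b Malloc0                          # 64 MiB bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 delay bdev layered on Malloc0 appears to be there so that I/O stays in flight long enough for the abort example launched later in this test to have outstanding commands to cancel.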
00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 [2024-11-15 11:23:56.714779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.345 11:23:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:16.603 [2024-11-15 11:23:56.789137] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:18.502 Initializing NVMe Controllers 00:05:18.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:18.502 controller IO queue size 128 less than required 00:05:18.502 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:18.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:18.502 Initialization complete. Launching workers. 
00:05:18.502 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27589 00:05:18.502 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27650, failed to submit 62 00:05:18.502 success 27593, unsuccessful 57, failed 0 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:18.502 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:18.502 rmmod nvme_tcp 00:05:18.502 rmmod nvme_fabrics 00:05:18.503 rmmod nvme_keyring 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2812510 ']' 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2812510 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2812510 ']' 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2812510 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2812510 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2812510' 00:05:18.760 killing process with pid 2812510 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2812510 00:05:18.760 11:23:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2812510 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:19.018 11:23:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.018 11:23:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:20.942 00:05:20.942 real 0m7.427s 00:05:20.942 user 0m10.582s 00:05:20.942 sys 0m2.597s 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 ************************************ 00:05:20.942 END TEST nvmf_abort 00:05:20.942 ************************************ 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:20.942 ************************************ 00:05:20.942 START TEST nvmf_ns_hotplug_stress 00:05:20.942 ************************************ 00:05:20.942 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:20.942 * Looking for test storage... 
00:05:21.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.201 --rc genhtml_branch_coverage=1 00:05:21.201 --rc genhtml_function_coverage=1 00:05:21.201 --rc genhtml_legend=1 00:05:21.201 --rc geninfo_all_blocks=1 00:05:21.201 --rc geninfo_unexecuted_blocks=1 00:05:21.201 00:05:21.201 ' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.201 --rc genhtml_branch_coverage=1 00:05:21.201 --rc genhtml_function_coverage=1 00:05:21.201 --rc genhtml_legend=1 00:05:21.201 --rc geninfo_all_blocks=1 00:05:21.201 --rc geninfo_unexecuted_blocks=1 00:05:21.201 00:05:21.201 ' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.201 --rc genhtml_branch_coverage=1 00:05:21.201 --rc genhtml_function_coverage=1 00:05:21.201 --rc genhtml_legend=1 00:05:21.201 --rc geninfo_all_blocks=1 00:05:21.201 --rc geninfo_unexecuted_blocks=1 00:05:21.201 00:05:21.201 ' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.201 --rc genhtml_branch_coverage=1 00:05:21.201 --rc genhtml_function_coverage=1 00:05:21.201 --rc genhtml_legend=1 00:05:21.201 --rc geninfo_all_blocks=1 00:05:21.201 --rc geninfo_unexecuted_blocks=1 00:05:21.201 00:05:21.201 ' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.201 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:21.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:21.202 11:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:23.762 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.762 
11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:23.762 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:23.762 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:23.763 Found net devices under 0000:09:00.0: cvl_0_0 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:23.763 Found net devices under 0000:09:00.1: cvl_0_1 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:23.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:23.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:05:23.763 00:05:23.763 --- 10.0.0.2 ping statistics --- 00:05:23.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:23.763 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:23.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:23.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:05:23.763 00:05:23.763 --- 10.0.0.1 ping statistics --- 00:05:23.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:23.763 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2814978 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2814978 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2814978 ']' 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.763 11:24:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.763 11:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.763 [2024-11-15 11:24:03.832262] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:05:23.763 [2024-11-15 11:24:03.832371] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:23.763 [2024-11-15 11:24:03.906624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.763 [2024-11-15 11:24:03.962927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:23.763 [2024-11-15 11:24:03.962983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:23.763 [2024-11-15 11:24:03.963006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:23.763 [2024-11-15 11:24:03.963018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:23.764 [2024-11-15 11:24:03.963028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
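The nvmf_tcp_init steps recorded above amount to flushing both E810 ports, moving one of them into a private network namespace, and addressing the two ends before the target is launched inside that namespace. A minimal sketch of the same bring-up, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses seen in the log:

    # target-side test network, mirroring the nvmf_tcp_init steps in nvmf/common.sh
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

The target application itself is then started inside that namespace, as in the ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE entry above, and the two one-packet pings confirm connectivity in both directions before the hotplug test proper begins.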
00:05:23.764 [2024-11-15 11:24:03.964474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.764 [2024-11-15 11:24:03.964534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.764 [2024-11-15 11:24:03.964538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:23.764 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:24.020 [2024-11-15 11:24:04.361923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.020 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:24.277 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:24.534 [2024-11-15 11:24:04.920732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:24.534 11:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:25.099 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:25.099 Malloc0 00:05:25.099 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:25.355 Delay0 00:05:25.613 11:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.870 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:26.127 NULL1 00:05:26.127 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:26.385 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2815287 00:05:26.385 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:26.385 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:26.385 11:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.757 Read completed with error (sct=0, sc=11) 00:05:27.757 11:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.757 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:27.757 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:28.015 true 00:05:28.015 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:28.015 11:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.947 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.205 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:29.205 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:29.462 true 00:05:29.462 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:29.462 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.719 11:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.977 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:29.977 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:30.234 true 00:05:30.234 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:30.234 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.492 11:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.749 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:30.749 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:31.006 true 00:05:31.006 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:31.006 11:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.937 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.501 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:32.501 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:32.501 true 00:05:32.501 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:32.501 11:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.759 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.017 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:33.017 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:33.274 true 00:05:33.274 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:33.274 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.531 11:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.096 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:34.096 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:34.096 true 00:05:34.354 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:34.354 11:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.288 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.545 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:35.545 11:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:35.803 true 00:05:35.803 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:35.804 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.062 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.320 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:36.320 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:36.578 true 00:05:36.578 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:36.578 11:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.509 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.509 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:37.509 11:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:37.767 true 00:05:37.767 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:37.767 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.024 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.589 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:38.589 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:38.589 true 00:05:38.589 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:38.589 11:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.522 11:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.780 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:39.780 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:40.038 true 00:05:40.038 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:40.038 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.295 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.553 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:40.553 11:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:40.811 true 00:05:40.811 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:40.811 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.743 11:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.001 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:42.001 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:42.259 true 00:05:42.259 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:42.259 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.516 11:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.774 11:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:42.774 11:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:43.031 true 00:05:43.031 11:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:43.031 11:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.964 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.964 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:43.964 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:44.222 true 00:05:44.222 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:44.479 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.735 11:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.992 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:44.992 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1017 00:05:45.250 true 00:05:45.250 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:45.250 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.508 11:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.767 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:45.767 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:46.074 true 00:05:46.074 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:46.074 11:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.034 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.292 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:47.292 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:47.549 true 00:05:47.549 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:47.549 11:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.806 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:48.063 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:48.321 true 00:05:48.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:48.321 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.578 11:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.835 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:48.835 
11:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:49.093 true 00:05:49.093 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:49.093 11:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.025 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.282 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:50.282 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:50.542 true 00:05:50.542 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:50.542 11:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.800 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.058 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:51.058 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:51.315 true 00:05:51.315 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:51.315 11:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.248 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.506 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:52.506 11:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:52.763 true 00:05:52.763 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:52.763 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.020 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.278 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:53.278 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:53.536 true 00:05:53.536 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:53.536 11:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.468 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.468 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:54.468 11:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:54.725 true 00:05:54.725 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:54.725 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.289 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.289 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:55.289 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:55.545 true 00:05:55.545 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:55.545 11:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.803 11:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.060 11:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:56.060 11:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:56.318 true 00:05:56.576 11:24:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:56.576 11:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.509 Initializing NVMe Controllers 00:05:57.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:57.509 Controller IO queue size 128, less than required. 00:05:57.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:57.509 Controller IO queue size 128, less than required. 00:05:57.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:57.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:57.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:57.509 Initialization complete. Launching workers. 00:05:57.509 ======================================================== 00:05:57.509 Latency(us) 00:05:57.509 Device Information : IOPS MiB/s Average min max 00:05:57.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 682.90 0.33 90404.33 2841.66 1022218.70 00:05:57.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9514.73 4.65 13453.00 3187.52 537991.95 00:05:57.509 ======================================================== 00:05:57.509 Total : 10197.63 4.98 18606.16 2841.66 1022218.70 00:05:57.509 00:05:57.509 11:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.767 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:57.767 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:58.025 true 00:05:58.025 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815287 00:05:58.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2815287) - No such process 00:05:58.025 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2815287 00:05:58.025 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.282 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.540 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:58.540 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:58.540 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:58.540 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:05:58.540 11:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:58.798 null0 00:05:58.798 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:58.798 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:58.798 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:59.056 null1 00:05:59.056 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.056 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.056 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:59.313 null2 00:05:59.313 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.313 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.313 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:59.571 null3 00:05:59.571 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.571 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.571 11:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:59.828 null4 00:05:59.828 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:59.828 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:59.828 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:00.085 null5 00:06:00.085 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.085 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.085 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:00.343 null6 00:06:00.343 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.343 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.343 11:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 
00:06:00.601 null7 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
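[Editor's note] The @58-@66 trace entries above and below show the shape of this phase of ns_hotplug_stress.sh: eight null bdevs (null0..null7, created with "100 4096", i.e. a 100 MB bdev with a 4096-byte block size) are set up, then eight background add_remove workers are launched against nqn.2016-06.io.spdk:cnode1 and their PIDs collected for the later wait. The following is a minimal sketch reconstructed from the traced commands; rpc.py is abbreviated (the log uses the full scripts/rpc.py path) and the exact loop syntax is assumed rather than copied from the script:

    nthreads=8
    pids=()
    # one null bdev per worker: name, size in MB, block size (per the traced arguments)
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done
    # one background add/remove worker per bdev; remember each PID for the final wait
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"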
00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.859 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
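[Editor's note] Each worker runs the add_remove function whose body is visible in the @14-@18 trace entries: it repeatedly maps its null bdev into the shared subsystem under a fixed namespace ID and immediately removes it again, ten times. A reconstructed sketch (argument handling and loop syntax are inferred from the trace, not copied verbatim; rpc.py is again abbreviated):

    add_remove() {
        local nsid=$1 bdev=$2
        # ten add/remove cycles against the shared subsystem
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }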
00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2819990 2819991 2819993 2819998 2820000 2820002 2820004 2820006 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.860 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.117 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.118 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.376 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:01.634 11:24:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:01.634 11:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:01.893 11:24:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.893 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.894 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.894 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:01.894 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.152 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:02.152 11:24:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
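[Editor's note] Because all eight workers target the same subsystem, their add and remove RPCs interleave arbitrarily in the trace, so the remove for a given NSID does not necessarily appear immediately after its own add in log order. When inspecting a run like this by hand, the subsystem's current namespace list can be dumped at any point with the nvmf_get_subsystems RPC (shown with the path abbreviated; this call is not part of the traced test):

    rpc.py nvmf_get_subsystems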
00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:02.718 11:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:02.976 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.234 11:24:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.234 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.235 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.492 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.492 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.493 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.493 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.493 11:24:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.493 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.493 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.493 11:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.750 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.750 11:24:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.751 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.751 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.751 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.751 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.751 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.751 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.008 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.008 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.008 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.008 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.009 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.009 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.009 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.009 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.266 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.829 11:24:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.829 11:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.086 11:24:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.086 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.344 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.344 11:24:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.602 11:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.860 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.118 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.376 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.376 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.377 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.377 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.634 11:24:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.634 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.634 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.634 11:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:06.892 11:24:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:06.892 rmmod nvme_tcp 00:06:06.892 rmmod nvme_fabrics 00:06:06.892 rmmod nvme_keyring 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2814978 ']' 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2814978 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2814978 ']' 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2814978 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2814978 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2814978' 00:06:06.892 killing process with pid 2814978 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2814978 00:06:06.892 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2814978 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:06:07.151 11:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:09.684 00:06:09.684 real 0m48.182s 00:06:09.684 user 3m43.684s 00:06:09.684 sys 0m16.219s 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.684 ************************************ 00:06:09.684 END TEST nvmf_ns_hotplug_stress 00:06:09.684 ************************************ 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.684 ************************************ 00:06:09.684 START TEST nvmf_delete_subsystem 00:06:09.684 ************************************ 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:09.684 * Looking for test storage... 00:06:09.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.684 11:24:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:09.684 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.685 --rc genhtml_branch_coverage=1 00:06:09.685 --rc genhtml_function_coverage=1 00:06:09.685 --rc genhtml_legend=1 00:06:09.685 --rc geninfo_all_blocks=1 00:06:09.685 --rc geninfo_unexecuted_blocks=1 00:06:09.685 00:06:09.685 ' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.685 --rc genhtml_branch_coverage=1 00:06:09.685 --rc genhtml_function_coverage=1 00:06:09.685 --rc genhtml_legend=1 00:06:09.685 --rc geninfo_all_blocks=1 00:06:09.685 --rc geninfo_unexecuted_blocks=1 00:06:09.685 00:06:09.685 ' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.685 --rc genhtml_branch_coverage=1 00:06:09.685 --rc genhtml_function_coverage=1 00:06:09.685 --rc genhtml_legend=1 00:06:09.685 --rc geninfo_all_blocks=1 00:06:09.685 --rc geninfo_unexecuted_blocks=1 00:06:09.685 00:06:09.685 ' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.685 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.685 --rc genhtml_branch_coverage=1 00:06:09.685 --rc genhtml_function_coverage=1 00:06:09.685 --rc genhtml_legend=1 00:06:09.685 --rc geninfo_all_blocks=1 00:06:09.685 --rc geninfo_unexecuted_blocks=1 00:06:09.685 00:06:09.685 ' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:09.685 11:24:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:11.643 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:11.643 
11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:11.643 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:11.643 Found net devices under 0000:09:00.0: cvl_0_0 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:11.643 Found net devices under 0000:09:00.1: cvl_0_1 
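The two "Found net devices under ..." messages above come from the device-discovery phase in nvmf/common.sh: for each supported NVMf-capable PCI device, the script expands the netdev entries the kernel exposes under sysfs. A minimal standalone sketch of that lookup (not the SPDK script itself; the PCI addresses are simply the two e810 ports reported in this run):

    # Sketch: list kernel net interfaces bound to each NVMf-capable PCI device,
    # mirroring the traced pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion.
    for pci in 0000:09:00.0 0000:09:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done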
00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:11.643 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.644 11:24:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:11.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:06:11.644 00:06:11.644 --- 10.0.0.2 ping statistics --- 00:06:11.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.644 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:11.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:06:11.644 00:06:11.644 --- 10.0.0.1 ping statistics --- 00:06:11.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.644 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2822901 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2822901 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2822901 ']' 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.644 11:24:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.644 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.902 [2024-11-15 11:24:52.090979] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:06:11.902 [2024-11-15 11:24:52.091067] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.902 [2024-11-15 11:24:52.162410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.902 [2024-11-15 11:24:52.221517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:11.902 [2024-11-15 11:24:52.221568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:11.902 [2024-11-15 11:24:52.221597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:11.902 [2024-11-15 11:24:52.221608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:11.902 [2024-11-15 11:24:52.221617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:11.902 [2024-11-15 11:24:52.226324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.902 [2024-11-15 11:24:52.226335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 [2024-11-15 11:24:52.379203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:12.161 11:24:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 [2024-11-15 11:24:52.395444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 NULL1 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 Delay0 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2822923 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:12.161 11:24:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:12.161 [2024-11-15 11:24:52.480248] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
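At this point delete_subsystem.sh has finished its setup: the traced rpc_cmd calls created the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and attached it as a namespace before launching spdk_nvme_perf (pid 2822923). The same setup, issued directly through rpc.py, would look roughly like this (a sketch based on the traced arguments; the relative scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions):

    rpc=scripts/rpc.py    # assumed path; the run above uses the absolute workspace path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev presumably keeps I/O outstanding while nvmf_delete_subsystem is issued below, which would account for the long run of "completed with error (sct=0, sc=8)" completions that follows.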
00:06:14.061 11:24:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:14.061 11:24:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.061 11:24:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 Write completed with error (sct=0, sc=8) 00:06:14.319 Write completed with error (sct=0, sc=8) 00:06:14.319 Write completed with error (sct=0, sc=8) 00:06:14.319 starting I/O failed: -6 00:06:14.319 Write completed with error (sct=0, sc=8) 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 starting I/O failed: -6 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.319 Write completed with error (sct=0, sc=8) 00:06:14.319 starting I/O failed: -6 00:06:14.319 Write completed with error (sct=0, sc=8) 00:06:14.319 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 [2024-11-15 11:24:54.562312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f418000d4d0 is same with the state(6) to be set 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 
starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 starting I/O failed: -6 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 [2024-11-15 11:24:54.562921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x927860 is same with the state(6) to be set 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 
00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with 
error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 Read completed with error (sct=0, sc=8) 00:06:14.320 Write completed with error (sct=0, sc=8) 00:06:14.320 [2024-11-15 11:24:54.563440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4180000c40 is same with the state(6) to be set 00:06:14.321 Read completed with error (sct=0, sc=8) 00:06:14.321 Read completed with error (sct=0, sc=8) 00:06:14.321 Read completed with error (sct=0, sc=8) 00:06:14.321 Write completed with error (sct=0, sc=8) 00:06:14.321 Read completed with error (sct=0, sc=8) 00:06:14.321 Read completed with error (sct=0, sc=8) 00:06:15.255 [2024-11-15 11:24:55.535191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9289a0 is same with the state(6) to be set 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 [2024-11-15 11:24:55.564878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x927680 is same with the state(6) to be set 00:06:15.255 Write completed with 
error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 [2024-11-15 11:24:55.565094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9272c0 is same with the state(6) to be set 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 [2024-11-15 11:24:55.566164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f418000d020 is same with the state(6) to be set 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Read completed with error (sct=0, sc=8) 00:06:15.255 Write completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Write completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, 
sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Write completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Write completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 Read completed with error (sct=0, sc=8) 00:06:15.256 [2024-11-15 11:24:55.566370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f418000d800 is same with the state(6) to be set 00:06:15.256 Initializing NVMe Controllers 00:06:15.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:15.256 Controller IO queue size 128, less than required. 00:06:15.256 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:15.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:15.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:15.256 Initialization complete. Launching workers. 00:06:15.256 ======================================================== 00:06:15.256 Latency(us) 00:06:15.256 Device Information : IOPS MiB/s Average min max 00:06:15.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.19 0.08 890981.23 611.87 1012321.30 00:06:15.256 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.75 0.08 906679.58 933.06 1012793.27 00:06:15.256 ======================================================== 00:06:15.256 Total : 337.94 0.17 898634.46 611.87 1012793.27 00:06:15.256 00:06:15.256 [2024-11-15 11:24:55.567108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9289a0 (9): Bad file descriptor 00:06:15.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:15.256 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.256 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:15.256 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2822923 00:06:15.256 11:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2822923 00:06:15.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2822923) - No such process 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2822923 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2822923 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2822923 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.822 [2024-11-15 11:24:56.086748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.822 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2823453 00:06:15.823 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:15.823 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:15.823 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:15.823 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.823 [2024-11-15 11:24:56.152223] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection 
to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:16.389 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.389 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:16.389 11:24:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:16.954 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.954 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:16.954 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.212 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.212 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:17.212 11:24:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:17.781 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:17.781 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:17.781 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.347 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.347 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:18.347 11:24:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.913 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:18.913 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:18.913 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:18.913 Initializing NVMe Controllers 00:06:18.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:18.913 Controller IO queue size 128, less than required. 00:06:18.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:18.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:18.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:18.913 Initialization complete. Launching workers. 
00:06:18.913 ======================================================== 00:06:18.913 Latency(us) 00:06:18.913 Device Information : IOPS MiB/s Average min max 00:06:18.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004170.82 1000195.82 1012296.37 00:06:18.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004523.82 1000177.30 1041451.33 00:06:18.913 ======================================================== 00:06:18.913 Total : 256.00 0.12 1004347.32 1000177.30 1041451.33 00:06:18.913 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2823453 00:06:19.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2823453) - No such process 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2823453 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:19.478 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:19.479 rmmod nvme_tcp 00:06:19.479 rmmod nvme_fabrics 00:06:19.479 rmmod nvme_keyring 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2822901 ']' 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2822901 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2822901 ']' 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2822901 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2822901 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2822901' 00:06:19.479 killing process with pid 2822901 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2822901 00:06:19.479 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2822901 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.742 11:24:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.651 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:21.651 00:06:21.651 real 0m12.440s 00:06:21.651 user 0m27.840s 00:06:21.651 sys 0m2.992s 00:06:21.651 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.651 11:25:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.651 ************************************ 00:06:21.651 END TEST nvmf_delete_subsystem 00:06:21.651 ************************************ 00:06:21.651 11:25:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:21.651 11:25:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.651 11:25:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.651 11:25:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:21.651 ************************************ 00:06:21.651 START TEST nvmf_host_management 00:06:21.651 ************************************ 00:06:21.651 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:21.912 * Looking for test storage... 
00:06:21.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.912 --rc genhtml_branch_coverage=1 00:06:21.912 --rc genhtml_function_coverage=1 00:06:21.912 --rc genhtml_legend=1 00:06:21.912 --rc geninfo_all_blocks=1 00:06:21.912 --rc geninfo_unexecuted_blocks=1 00:06:21.912 00:06:21.912 ' 00:06:21.912 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.912 --rc genhtml_branch_coverage=1 00:06:21.912 --rc genhtml_function_coverage=1 00:06:21.913 --rc genhtml_legend=1 00:06:21.913 --rc geninfo_all_blocks=1 00:06:21.913 --rc geninfo_unexecuted_blocks=1 00:06:21.913 00:06:21.913 ' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.913 --rc genhtml_branch_coverage=1 00:06:21.913 --rc genhtml_function_coverage=1 00:06:21.913 --rc genhtml_legend=1 00:06:21.913 --rc geninfo_all_blocks=1 00:06:21.913 --rc geninfo_unexecuted_blocks=1 00:06:21.913 00:06:21.913 ' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.913 --rc genhtml_branch_coverage=1 00:06:21.913 --rc genhtml_function_coverage=1 00:06:21.913 --rc genhtml_legend=1 00:06:21.913 --rc geninfo_all_blocks=1 00:06:21.913 --rc geninfo_unexecuted_blocks=1 00:06:21.913 00:06:21.913 ' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:21.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:21.913 11:25:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:24.446 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:24.446 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:24.446 Found net devices under 0000:09:00.0: cvl_0_0 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.446 11:25:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:24.446 Found net devices under 0000:09:00.1: cvl_0_1 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.446 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:06:24.447 00:06:24.447 --- 10.0.0.2 ping statistics --- 00:06:24.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.447 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:06:24.447 00:06:24.447 --- 10.0.0.1 ping statistics --- 00:06:24.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.447 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2825807 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2825807 00:06:24.447 11:25:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2825807 ']' 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.447 [2024-11-15 11:25:04.554056] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:06:24.447 [2024-11-15 11:25:04.554134] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.447 [2024-11-15 11:25:04.628142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.447 [2024-11-15 11:25:04.689753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.447 [2024-11-15 11:25:04.689805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.447 [2024-11-15 11:25:04.689833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.447 [2024-11-15 11:25:04.689844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.447 [2024-11-15 11:25:04.689854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:24.447 [2024-11-15 11:25:04.691434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.447 [2024-11-15 11:25:04.691499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.447 [2024-11-15 11:25:04.691569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.447 [2024-11-15 11:25:04.691566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.447 [2024-11-15 11:25:04.851809] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:24.447 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.705 Malloc0 00:06:24.705 [2024-11-15 11:25:04.930624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2825849 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2825849 /var/tmp/bdevperf.sock 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2825849 ']' 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:24.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:24.705 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:24.705 { 00:06:24.705 "params": { 00:06:24.705 "name": "Nvme$subsystem", 00:06:24.705 "trtype": "$TEST_TRANSPORT", 00:06:24.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:24.705 "adrfam": "ipv4", 00:06:24.706 "trsvcid": "$NVMF_PORT", 00:06:24.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:24.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:24.706 "hdgst": ${hdgst:-false}, 00:06:24.706 "ddgst": ${ddgst:-false} 00:06:24.706 }, 00:06:24.706 "method": "bdev_nvme_attach_controller" 00:06:24.706 } 00:06:24.706 EOF 00:06:24.706 )") 00:06:24.706 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:24.706 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:24.706 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:24.706 11:25:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:24.706 "params": { 00:06:24.706 "name": "Nvme0", 00:06:24.706 "trtype": "tcp", 00:06:24.706 "traddr": "10.0.0.2", 00:06:24.706 "adrfam": "ipv4", 00:06:24.706 "trsvcid": "4420", 00:06:24.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:24.706 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:24.706 "hdgst": false, 00:06:24.706 "ddgst": false 00:06:24.706 }, 00:06:24.706 "method": "bdev_nvme_attach_controller" 00:06:24.706 }' 00:06:24.706 [2024-11-15 11:25:05.016712] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:06:24.706 [2024-11-15 11:25:05.016800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825849 ] 00:06:24.706 [2024-11-15 11:25:05.090454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.964 [2024-11-15 11:25:05.151711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.223 Running I/O for 10 seconds... 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:25.223 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:25.483 
11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.483 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.483 [2024-11-15 11:25:05.857942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:25.483 [2024-11-15 11:25:05.858155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 
[2024-11-15 11:25:05.858468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.483 [2024-11-15 11:25:05.858724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.483 [2024-11-15 11:25:05.858739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 
11:25:05.858767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.858982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.858997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.484 [2024-11-15 11:25:05.859908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.484 [2024-11-15 11:25:05.859921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.485 [2024-11-15 11:25:05.861150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:25.485 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.485 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:06:25.485 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.485 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.485 task offset: 85760 on job bdev=Nvme0n1 fails 00:06:25.485 00:06:25.485 Latency(us) 00:06:25.485 [2024-11-15T10:25:05.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.485 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:25.485 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:25.485 Verification LBA range: start 0x0 length 0x400 00:06:25.485 Nvme0n1 : 0.40 1588.75 99.30 158.87 0.00 35562.84 2633.58 34758.35 00:06:25.485 [2024-11-15T10:25:05.912Z] =================================================================================================================== 00:06:25.485 [2024-11-15T10:25:05.912Z] Total : 1588.75 99.30 158.87 0.00 35562.84 2633.58 34758.35 00:06:25.485 [2024-11-15 11:25:05.863108] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.485 [2024-11-15 11:25:05.863145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fda40 (9): Bad file descriptor 00:06:25.485 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.485 11:25:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:25.743 [2024-11-15 11:25:05.966423] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2825849 00:06:26.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2825849) - No such process 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:26.676 { 00:06:26.676 "params": { 00:06:26.676 "name": "Nvme$subsystem", 00:06:26.676 "trtype": "$TEST_TRANSPORT", 00:06:26.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:26.676 "adrfam": "ipv4", 00:06:26.676 "trsvcid": "$NVMF_PORT", 00:06:26.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:26.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:26.676 "hdgst": ${hdgst:-false}, 00:06:26.676 "ddgst": ${ddgst:-false} 
00:06:26.676 }, 00:06:26.676 "method": "bdev_nvme_attach_controller" 00:06:26.676 } 00:06:26.676 EOF 00:06:26.676 )") 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:26.676 11:25:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:26.676 "params": { 00:06:26.676 "name": "Nvme0", 00:06:26.676 "trtype": "tcp", 00:06:26.676 "traddr": "10.0.0.2", 00:06:26.676 "adrfam": "ipv4", 00:06:26.676 "trsvcid": "4420", 00:06:26.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:26.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:26.676 "hdgst": false, 00:06:26.676 "ddgst": false 00:06:26.676 }, 00:06:26.676 "method": "bdev_nvme_attach_controller" 00:06:26.676 }' 00:06:26.676 [2024-11-15 11:25:06.922476] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:06:26.676 [2024-11-15 11:25:06.922571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826131 ] 00:06:26.676 [2024-11-15 11:25:06.990971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.676 [2024-11-15 11:25:07.053390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.934 Running I/O for 1 seconds... 00:06:28.308 1600.00 IOPS, 100.00 MiB/s 00:06:28.308 Latency(us) 00:06:28.308 [2024-11-15T10:25:08.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:28.308 Verification LBA range: start 0x0 length 0x400 00:06:28.308 Nvme0n1 : 1.05 1585.07 99.07 0.00 0.00 38259.15 9320.68 50098.63 00:06:28.308 [2024-11-15T10:25:08.735Z] =================================================================================================================== 00:06:28.308 [2024-11-15T10:25:08.735Z] Total : 1585.07 99.07 0.00 0.00 38259.15 9320.68 50098.63 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.308 rmmod nvme_tcp 00:06:28.308 rmmod nvme_fabrics 00:06:28.308 rmmod nvme_keyring 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2825807 ']' 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2825807 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2825807 ']' 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2825807 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2825807 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2825807' 00:06:28.308 killing process with pid 2825807 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2825807 00:06:28.308 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2825807 00:06:28.567 [2024-11-15 11:25:08.874195] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:06:28.567 11:25:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:31.104 00:06:31.104 real 0m8.903s 00:06:31.104 user 0m20.077s 00:06:31.104 sys 0m2.810s 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:31.104 ************************************ 00:06:31.104 END TEST nvmf_host_management 00:06:31.104 ************************************ 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.104 11:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.104 ************************************ 00:06:31.104 START TEST nvmf_lvol 00:06:31.104 ************************************ 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:31.104 * Looking for test storage... 00:06:31.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:31.104 11:25:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.104 --rc genhtml_branch_coverage=1 00:06:31.104 --rc genhtml_function_coverage=1 00:06:31.104 --rc genhtml_legend=1 00:06:31.104 --rc geninfo_all_blocks=1 00:06:31.104 --rc geninfo_unexecuted_blocks=1 00:06:31.104 00:06:31.104 ' 00:06:31.104 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:31.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.104 --rc genhtml_branch_coverage=1 00:06:31.104 --rc genhtml_function_coverage=1 00:06:31.104 --rc genhtml_legend=1 00:06:31.104 --rc geninfo_all_blocks=1 00:06:31.104 --rc geninfo_unexecuted_blocks=1 00:06:31.105 00:06:31.105 ' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.105 --rc genhtml_branch_coverage=1 00:06:31.105 --rc genhtml_function_coverage=1 00:06:31.105 --rc genhtml_legend=1 00:06:31.105 --rc geninfo_all_blocks=1 00:06:31.105 --rc geninfo_unexecuted_blocks=1 00:06:31.105 00:06:31.105 ' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.105 --rc genhtml_branch_coverage=1 00:06:31.105 --rc genhtml_function_coverage=1 00:06:31.105 --rc genhtml_legend=1 00:06:31.105 --rc geninfo_all_blocks=1 00:06:31.105 --rc geninfo_unexecuted_blocks=1 00:06:31.105 00:06:31.105 ' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.105 11:25:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:33.011 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:33.011 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.011 11:25:13 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:33.011 Found net devices under 0000:09:00.0: cvl_0_0 00:06:33.011 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:33.012 Found net devices under 0000:09:00.1: cvl_0_1 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:06:33.012 00:06:33.012 --- 10.0.0.2 ping statistics --- 00:06:33.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.012 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:33.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:06:33.012 00:06:33.012 --- 10.0.0.1 ping statistics --- 00:06:33.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.012 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.012 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2828353 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2828353 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2828353 ']' 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.270 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.270 [2024-11-15 11:25:13.492008] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:06:33.270 [2024-11-15 11:25:13.492084] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.270 [2024-11-15 11:25:13.565789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.270 [2024-11-15 11:25:13.625714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.270 [2024-11-15 11:25:13.625786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.270 [2024-11-15 11:25:13.625815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.270 [2024-11-15 11:25:13.625827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.270 [2024-11-15 11:25:13.625837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:33.270 [2024-11-15 11:25:13.627329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.270 [2024-11-15 11:25:13.627398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.270 [2024-11-15 11:25:13.627402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.529 11:25:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:33.786 [2024-11-15 11:25:14.022724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.786 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:34.044 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:34.044 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:34.303 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:34.303 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:34.560 11:25:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:34.818 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=420fedc0-1617-4cc7-baec-0a5e92ec6806 00:06:34.818 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 420fedc0-1617-4cc7-baec-0a5e92ec6806 lvol 20 00:06:35.076 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1d486c67-34ee-4124-b444-cd1d5055a590 00:06:35.076 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:35.334 11:25:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d486c67-34ee-4124-b444-cd1d5055a590 00:06:35.899 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:35.899 [2024-11-15 11:25:16.258237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.899 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:36.157 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2828777 00:06:36.157 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:36.157 11:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:37.531 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1d486c67-34ee-4124-b444-cd1d5055a590 MY_SNAPSHOT 00:06:37.531 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2f6eda8b-bcc4-4807-ab41-39f1355ded4c 00:06:37.531 11:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1d486c67-34ee-4124-b444-cd1d5055a590 30 00:06:37.789 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f6eda8b-bcc4-4807-ab41-39f1355ded4c MY_CLONE 00:06:38.355 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e8c1f710-1ea2-48eb-925e-c089ae2bb6b3 00:06:38.355 11:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e8c1f710-1ea2-48eb-925e-c089ae2bb6b3 00:06:38.973 11:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2828777 00:06:47.101 Initializing NVMe Controllers 00:06:47.101 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:47.101 Controller IO queue size 128, less than required. 00:06:47.101 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
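A minimal sketch of the rpc.py sequence the nvmf_lvol trace above drives, condensed from the xtrace output; the $rpc shorthand and the capture of returned UUIDs into shell variables are conveniences added here (the run's actual UUIDs are the ones printed in the trace), and the default /var/tmp/spdk.sock RPC socket is assumed:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport plus backing bdevs: two 64 MiB malloc bdevs (512 B blocks) striped into raid0
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

# lvstore on the raid, then a 20 MiB lvol inside it
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# expose the lvol over NVMe/TCP on the target-side address
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# drive randwrite I/O while snapshotting, resizing, cloning and inflating the lvol underneath it
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait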
00:06:47.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:47.101 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:47.101 Initialization complete. Launching workers. 00:06:47.101 ======================================================== 00:06:47.102 Latency(us) 00:06:47.102 Device Information : IOPS MiB/s Average min max 00:06:47.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10404.80 40.64 12310.57 2171.05 98240.62 00:06:47.102 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10157.80 39.68 12604.21 2232.99 72308.29 00:06:47.102 ======================================================== 00:06:47.102 Total : 20562.60 80.32 12455.63 2171.05 98240.62 00:06:47.102 00:06:47.102 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:47.102 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d486c67-34ee-4124-b444-cd1d5055a590 00:06:47.358 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 420fedc0-1617-4cc7-baec-0a5e92ec6806 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:47.615 rmmod nvme_tcp 00:06:47.615 rmmod nvme_fabrics 00:06:47.615 rmmod nvme_keyring 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2828353 ']' 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2828353 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2828353 ']' 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2828353 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2828353 00:06:47.615 11:25:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2828353' 00:06:47.615 killing process with pid 2828353 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2828353 00:06:47.615 11:25:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2828353 00:06:47.874 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:47.874 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:47.874 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:47.874 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.875 11:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:50.413 00:06:50.413 real 0m19.248s 00:06:50.413 user 1m5.894s 00:06:50.413 sys 0m5.349s 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.413 ************************************ 00:06:50.413 END TEST nvmf_lvol 00:06:50.413 ************************************ 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:50.413 ************************************ 00:06:50.413 START TEST nvmf_lvs_grow 00:06:50.413 ************************************ 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:50.413 * Looking for test storage... 
00:06:50.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.413 --rc genhtml_branch_coverage=1 00:06:50.413 --rc genhtml_function_coverage=1 00:06:50.413 --rc genhtml_legend=1 00:06:50.413 --rc geninfo_all_blocks=1 00:06:50.413 --rc geninfo_unexecuted_blocks=1 00:06:50.413 00:06:50.413 ' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.413 --rc genhtml_branch_coverage=1 00:06:50.413 --rc genhtml_function_coverage=1 00:06:50.413 --rc genhtml_legend=1 00:06:50.413 --rc geninfo_all_blocks=1 00:06:50.413 --rc geninfo_unexecuted_blocks=1 00:06:50.413 00:06:50.413 ' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.413 --rc genhtml_branch_coverage=1 00:06:50.413 --rc genhtml_function_coverage=1 00:06:50.413 --rc genhtml_legend=1 00:06:50.413 --rc geninfo_all_blocks=1 00:06:50.413 --rc geninfo_unexecuted_blocks=1 00:06:50.413 00:06:50.413 ' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.413 --rc genhtml_branch_coverage=1 00:06:50.413 --rc genhtml_function_coverage=1 00:06:50.413 --rc genhtml_legend=1 00:06:50.413 --rc geninfo_all_blocks=1 00:06:50.413 --rc geninfo_unexecuted_blocks=1 00:06:50.413 00:06:50.413 ' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:50.413 11:25:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.413 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.414 11:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.311 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:52.312 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:52.312 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.312 11:25:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:52.312 Found net devices under 0000:09:00.0: cvl_0_0 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:52.312 Found net devices under 0000:09:00.1: cvl_0_1 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.312 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:06:52.571 00:06:52.571 --- 10.0.0.2 ping statistics --- 00:06:52.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.571 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
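# The nvmf_tcp_init trace above wires the two E810 ports into a back-to-back target/initiator
# topology on a single host: the target port is moved into its own network namespace so each
# side gets a separate IP stack. A condensed sketch of the commands just executed (names and
# addresses as in this run; error handling and the iptables comment tag omitted):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port -> test namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability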
00:06:52.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:06:52.571 00:06:52.571 --- 10.0.0.1 ping statistics --- 00:06:52.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.571 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2832067 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2832067 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2832067 ']' 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.571 11:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.571 [2024-11-15 11:25:32.920438] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
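# nvmfappstart above launches the SPDK target inside that namespace and waits for its RPC
# socket before the test proceeds; the TCP transport is created immediately afterwards.
# A rough equivalent (paths shortened; the real waitforlisten helper is more careful):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192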
00:06:52.571 [2024-11-15 11:25:32.920542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.571 [2024-11-15 11:25:32.994335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.829 [2024-11-15 11:25:33.049875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.829 [2024-11-15 11:25:33.049932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.829 [2024-11-15 11:25:33.049960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.829 [2024-11-15 11:25:33.049971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.829 [2024-11-15 11:25:33.049982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:52.829 [2024-11-15 11:25:33.050567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.829 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:53.086 [2024-11-15 11:25:33.439543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.086 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:53.086 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.086 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.086 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.086 ************************************ 00:06:53.086 START TEST lvs_grow_clean 00:06:53.086 ************************************ 00:06:53.086 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:53.086 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:53.087 11:25:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.087 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:53.652 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:53.652 11:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:53.652 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=968733bb-0914-4eff-8f9f-b086af80a1c7 00:06:53.652 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:06:53.652 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:53.909 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:53.909 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:53.910 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 968733bb-0914-4eff-8f9f-b086af80a1c7 lvol 150 00:06:54.491 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=208cce77-0494-443e-a717-b9abbb75cade 00:06:54.491 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:54.491 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:54.491 [2024-11-15 11:25:34.847724] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:54.491 [2024-11-15 11:25:34.847819] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:54.491 true 00:06:54.491 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
968733bb-0914-4eff-8f9f-b086af80a1c7 00:06:54.491 11:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:54.749 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:54.749 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:55.007 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 208cce77-0494-443e-a717-b9abbb75cade 00:06:55.264 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:55.522 [2024-11-15 11:25:35.927023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.522 11:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2832508 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2832508 /var/tmp/bdevperf.sock 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2832508 ']' 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:56.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:56.089 [2024-11-15 11:25:36.254662] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
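# The cluster counts this test keeps checking follow directly from the sizes set above and the
# 4 MiB (4194304-byte) cluster size: a 200 MiB backing file gives 50 clusters, one of which is
# taken by lvstore metadata in this run, hence total_data_clusters=49; after the file is grown
# to 400 MiB and bdev_lvol_grow_lvstore runs, 100 - 1 = 99; the 150 MiB lvol occupies
# ceil(150/4) = 38 clusters, so free_clusters ends up at 99 - 38 = 61. The count can be
# re-read at any point with (command as used by the test, path shortened):
  ./scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 \
      | jq -r '.[0].total_data_clusters'     # 49 before the grow, 99 after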
00:06:56.089 [2024-11-15 11:25:36.254743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832508 ] 00:06:56.089 [2024-11-15 11:25:36.319512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.089 [2024-11-15 11:25:36.376473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:56.089 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:56.655 Nvme0n1 00:06:56.655 11:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:56.913 [ 00:06:56.913 { 00:06:56.913 "name": "Nvme0n1", 00:06:56.913 "aliases": [ 00:06:56.913 "208cce77-0494-443e-a717-b9abbb75cade" 00:06:56.913 ], 00:06:56.913 "product_name": "NVMe disk", 00:06:56.913 "block_size": 4096, 00:06:56.913 "num_blocks": 38912, 00:06:56.913 "uuid": "208cce77-0494-443e-a717-b9abbb75cade", 00:06:56.913 "numa_id": 0, 00:06:56.913 "assigned_rate_limits": { 00:06:56.913 "rw_ios_per_sec": 0, 00:06:56.913 "rw_mbytes_per_sec": 0, 00:06:56.913 "r_mbytes_per_sec": 0, 00:06:56.913 "w_mbytes_per_sec": 0 00:06:56.913 }, 00:06:56.913 "claimed": false, 00:06:56.913 "zoned": false, 00:06:56.913 "supported_io_types": { 00:06:56.913 "read": true, 00:06:56.913 "write": true, 00:06:56.913 "unmap": true, 00:06:56.913 "flush": true, 00:06:56.913 "reset": true, 00:06:56.913 "nvme_admin": true, 00:06:56.913 "nvme_io": true, 00:06:56.913 "nvme_io_md": false, 00:06:56.913 "write_zeroes": true, 00:06:56.913 "zcopy": false, 00:06:56.913 "get_zone_info": false, 00:06:56.913 "zone_management": false, 00:06:56.913 "zone_append": false, 00:06:56.913 "compare": true, 00:06:56.913 "compare_and_write": true, 00:06:56.913 "abort": true, 00:06:56.913 "seek_hole": false, 00:06:56.913 "seek_data": false, 00:06:56.913 "copy": true, 00:06:56.913 "nvme_iov_md": false 00:06:56.913 }, 00:06:56.913 "memory_domains": [ 00:06:56.913 { 00:06:56.913 "dma_device_id": "system", 00:06:56.913 "dma_device_type": 1 00:06:56.913 } 00:06:56.913 ], 00:06:56.913 "driver_specific": { 00:06:56.913 "nvme": [ 00:06:56.913 { 00:06:56.913 "trid": { 00:06:56.913 "trtype": "TCP", 00:06:56.913 "adrfam": "IPv4", 00:06:56.913 "traddr": "10.0.0.2", 00:06:56.913 "trsvcid": "4420", 00:06:56.913 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:56.913 }, 00:06:56.913 "ctrlr_data": { 00:06:56.913 "cntlid": 1, 00:06:56.913 "vendor_id": "0x8086", 00:06:56.913 "model_number": "SPDK bdev Controller", 00:06:56.913 "serial_number": "SPDK0", 00:06:56.913 "firmware_revision": "25.01", 00:06:56.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:56.913 "oacs": { 00:06:56.913 "security": 0, 00:06:56.913 "format": 0, 00:06:56.913 "firmware": 0, 00:06:56.913 "ns_manage": 0 00:06:56.913 }, 00:06:56.913 "multi_ctrlr": true, 00:06:56.913 
"ana_reporting": false 00:06:56.913 }, 00:06:56.913 "vs": { 00:06:56.913 "nvme_version": "1.3" 00:06:56.913 }, 00:06:56.913 "ns_data": { 00:06:56.913 "id": 1, 00:06:56.913 "can_share": true 00:06:56.913 } 00:06:56.913 } 00:06:56.913 ], 00:06:56.913 "mp_policy": "active_passive" 00:06:56.913 } 00:06:56.913 } 00:06:56.913 ] 00:06:56.913 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2832643 00:06:56.913 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:56.913 11:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:56.913 Running I/O for 10 seconds... 00:06:58.286 Latency(us) 00:06:58.286 [2024-11-15T10:25:38.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.286 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:06:58.286 [2024-11-15T10:25:38.713Z] =================================================================================================================== 00:06:58.286 [2024-11-15T10:25:38.713Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:06:58.286 00:06:58.852 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:06:59.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.109 Nvme0n1 : 2.00 15194.00 59.35 0.00 0.00 0.00 0.00 0.00 00:06:59.109 [2024-11-15T10:25:39.536Z] =================================================================================================================== 00:06:59.109 [2024-11-15T10:25:39.537Z] Total : 15194.00 59.35 0.00 0.00 0.00 0.00 0.00 00:06:59.110 00:06:59.110 true 00:06:59.110 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:06:59.110 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:59.368 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:59.368 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:59.368 11:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2832643 00:06:59.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.934 Nvme0n1 : 3.00 15273.33 59.66 0.00 0.00 0.00 0.00 0.00 00:06:59.934 [2024-11-15T10:25:40.361Z] =================================================================================================================== 00:06:59.934 [2024-11-15T10:25:40.361Z] Total : 15273.33 59.66 0.00 0.00 0.00 0.00 0.00 00:06:59.934 00:07:01.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.307 Nvme0n1 : 4.00 15377.00 60.07 0.00 0.00 0.00 0.00 0.00 00:07:01.307 [2024-11-15T10:25:41.734Z] 
=================================================================================================================== 00:07:01.307 [2024-11-15T10:25:41.734Z] Total : 15377.00 60.07 0.00 0.00 0.00 0.00 0.00 00:07:01.307 00:07:02.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.242 Nvme0n1 : 5.00 15464.00 60.41 0.00 0.00 0.00 0.00 0.00 00:07:02.242 [2024-11-15T10:25:42.669Z] =================================================================================================================== 00:07:02.242 [2024-11-15T10:25:42.669Z] Total : 15464.00 60.41 0.00 0.00 0.00 0.00 0.00 00:07:02.242 00:07:03.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.175 Nvme0n1 : 6.00 15521.83 60.63 0.00 0.00 0.00 0.00 0.00 00:07:03.175 [2024-11-15T10:25:43.602Z] =================================================================================================================== 00:07:03.175 [2024-11-15T10:25:43.602Z] Total : 15521.83 60.63 0.00 0.00 0.00 0.00 0.00 00:07:03.175 00:07:04.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.108 Nvme0n1 : 7.00 15572.29 60.83 0.00 0.00 0.00 0.00 0.00 00:07:04.108 [2024-11-15T10:25:44.535Z] =================================================================================================================== 00:07:04.108 [2024-11-15T10:25:44.535Z] Total : 15572.29 60.83 0.00 0.00 0.00 0.00 0.00 00:07:04.108 00:07:05.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.042 Nvme0n1 : 8.00 15607.00 60.96 0.00 0.00 0.00 0.00 0.00 00:07:05.042 [2024-11-15T10:25:45.469Z] =================================================================================================================== 00:07:05.042 [2024-11-15T10:25:45.469Z] Total : 15607.00 60.96 0.00 0.00 0.00 0.00 0.00 00:07:05.042 00:07:05.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.977 Nvme0n1 : 9.00 15636.78 61.08 0.00 0.00 0.00 0.00 0.00 00:07:05.977 [2024-11-15T10:25:46.404Z] =================================================================================================================== 00:07:05.977 [2024-11-15T10:25:46.404Z] Total : 15636.78 61.08 0.00 0.00 0.00 0.00 0.00 00:07:05.977 00:07:07.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.350 Nvme0n1 : 10.00 15660.60 61.17 0.00 0.00 0.00 0.00 0.00 00:07:07.350 [2024-11-15T10:25:47.777Z] =================================================================================================================== 00:07:07.350 [2024-11-15T10:25:47.777Z] Total : 15660.60 61.17 0.00 0.00 0.00 0.00 0.00 00:07:07.350 00:07:07.350 00:07:07.350 Latency(us) 00:07:07.350 [2024-11-15T10:25:47.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.350 Nvme0n1 : 10.01 15664.45 61.19 0.00 0.00 8166.79 2512.21 15922.82 00:07:07.350 [2024-11-15T10:25:47.777Z] =================================================================================================================== 00:07:07.350 [2024-11-15T10:25:47.777Z] Total : 15664.45 61.19 0.00 0.00 8166.79 2512.21 15922.82 00:07:07.350 { 00:07:07.350 "results": [ 00:07:07.350 { 00:07:07.350 "job": "Nvme0n1", 00:07:07.350 "core_mask": "0x2", 00:07:07.350 "workload": "randwrite", 00:07:07.350 "status": "finished", 00:07:07.350 "queue_depth": 128, 00:07:07.350 "io_size": 4096, 00:07:07.350 
"runtime": 10.005714, 00:07:07.350 "iops": 15664.449333650751, 00:07:07.350 "mibps": 61.18925520957325, 00:07:07.350 "io_failed": 0, 00:07:07.350 "io_timeout": 0, 00:07:07.350 "avg_latency_us": 8166.788190607441, 00:07:07.350 "min_latency_us": 2512.213333333333, 00:07:07.350 "max_latency_us": 15922.82074074074 00:07:07.350 } 00:07:07.350 ], 00:07:07.350 "core_count": 1 00:07:07.350 } 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2832508 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2832508 ']' 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2832508 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832508 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832508' 00:07:07.350 killing process with pid 2832508 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2832508 00:07:07.350 Received shutdown signal, test time was about 10.000000 seconds 00:07:07.350 00:07:07.350 Latency(us) 00:07:07.350 [2024-11-15T10:25:47.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.350 [2024-11-15T10:25:47.777Z] =================================================================================================================== 00:07:07.350 [2024-11-15T10:25:47.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2832508 00:07:07.350 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:07.609 11:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:07.866 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:07.866 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:08.125 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:08.125 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:08.125 11:25:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:08.382 [2024-11-15 11:25:48.690156] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:08.382 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:08.640 request: 00:07:08.640 { 00:07:08.640 "uuid": "968733bb-0914-4eff-8f9f-b086af80a1c7", 00:07:08.640 "method": "bdev_lvol_get_lvstores", 00:07:08.640 "req_id": 1 00:07:08.640 } 00:07:08.640 Got JSON-RPC error response 00:07:08.640 response: 00:07:08.640 { 00:07:08.640 "code": -19, 00:07:08.640 "message": "No such device" 00:07:08.640 } 00:07:08.640 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:08.640 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.640 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.640 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.640 11:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.898 aio_bdev 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 208cce77-0494-443e-a717-b9abbb75cade 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=208cce77-0494-443e-a717-b9abbb75cade 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.898 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:09.155 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 208cce77-0494-443e-a717-b9abbb75cade -t 2000 00:07:09.414 [ 00:07:09.414 { 00:07:09.414 "name": "208cce77-0494-443e-a717-b9abbb75cade", 00:07:09.414 "aliases": [ 00:07:09.414 "lvs/lvol" 00:07:09.414 ], 00:07:09.414 "product_name": "Logical Volume", 00:07:09.414 "block_size": 4096, 00:07:09.414 "num_blocks": 38912, 00:07:09.414 "uuid": "208cce77-0494-443e-a717-b9abbb75cade", 00:07:09.414 "assigned_rate_limits": { 00:07:09.414 "rw_ios_per_sec": 0, 00:07:09.414 "rw_mbytes_per_sec": 0, 00:07:09.414 "r_mbytes_per_sec": 0, 00:07:09.414 "w_mbytes_per_sec": 0 00:07:09.414 }, 00:07:09.414 "claimed": false, 00:07:09.414 "zoned": false, 00:07:09.414 "supported_io_types": { 00:07:09.414 "read": true, 00:07:09.414 "write": true, 00:07:09.414 "unmap": true, 00:07:09.414 "flush": false, 00:07:09.414 "reset": true, 00:07:09.414 "nvme_admin": false, 00:07:09.414 "nvme_io": false, 00:07:09.414 "nvme_io_md": false, 00:07:09.414 "write_zeroes": true, 00:07:09.414 "zcopy": false, 00:07:09.414 "get_zone_info": false, 00:07:09.414 "zone_management": false, 00:07:09.414 "zone_append": false, 00:07:09.414 "compare": false, 00:07:09.414 "compare_and_write": false, 00:07:09.414 "abort": false, 00:07:09.414 "seek_hole": true, 00:07:09.414 "seek_data": true, 00:07:09.414 "copy": false, 00:07:09.414 "nvme_iov_md": false 00:07:09.414 }, 00:07:09.414 "driver_specific": { 00:07:09.414 "lvol": { 00:07:09.414 "lvol_store_uuid": "968733bb-0914-4eff-8f9f-b086af80a1c7", 00:07:09.414 "base_bdev": "aio_bdev", 00:07:09.414 "thin_provision": false, 00:07:09.414 "num_allocated_clusters": 38, 00:07:09.414 "snapshot": false, 00:07:09.414 "clone": false, 00:07:09.414 "esnap_clone": false 00:07:09.414 } 00:07:09.414 } 00:07:09.414 } 00:07:09.414 ] 00:07:09.414 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:09.414 11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:09.414 
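# Deleting aio_bdev a few lines up detached the lvstore, which is why the wrapped
# bdev_lvol_get_lvstores call was expected to fail with -19 'No such device'. The backing file
# itself was untouched, so re-creating the AIO bdev re-examines it and brings the lvstore and
# its lvol back: num_allocated_clusters=38 above matches the 150 MiB lvol, and the checks that
# follow re-read free/total clusters (61 and 99). Sketch of the reload, paths shortened:
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_get_bdevs -b 208cce77-0494-443e-a717-b9abbb75cade -t 2000 \
      | jq '.[0].driver_specific.lvol.num_allocated_clusters'     # 38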
11:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:09.672 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:09.672 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:09.672 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:09.934 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:09.934 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 208cce77-0494-443e-a717-b9abbb75cade 00:07:10.287 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 968733bb-0914-4eff-8f9f-b086af80a1c7 00:07:10.556 11:25:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:10.813 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.813 00:07:10.813 real 0m17.739s 00:07:10.813 user 0m17.298s 00:07:10.813 sys 0m1.829s 00:07:10.813 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.813 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:10.813 ************************************ 00:07:10.813 END TEST lvs_grow_clean 00:07:10.813 ************************************ 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.071 ************************************ 00:07:11.071 START TEST lvs_grow_dirty 00:07:11.071 ************************************ 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.071 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.328 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:11.328 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:11.586 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:11.586 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:11.586 11:25:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:11.844 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:11.844 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:11.844 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u afb6505e-0dde-45a1-84b2-b9e52ef8478c lvol 150 00:07:12.101 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=92710835-ff99-417f-8255-d4e178ed25a3 00:07:12.101 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.101 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.359 [2024-11-15 11:25:52.642689] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.359 [2024-11-15 11:25:52.642791] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.359 true 00:07:12.359 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:12.359 11:25:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:12.617 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:12.617 11:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.876 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 92710835-ff99-417f-8255-d4e178ed25a3 00:07:13.134 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.392 [2024-11-15 11:25:53.725976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.392 11:25:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2834712 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2834712 /var/tmp/bdevperf.sock 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2834712 ']' 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.650 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:13.650 [2024-11-15 11:25:54.057177] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:07:13.650 [2024-11-15 11:25:54.057250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834712 ] 00:07:13.908 [2024-11-15 11:25:54.123464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.908 [2024-11-15 11:25:54.184644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.908 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.908 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:13.908 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:14.473 Nvme0n1 00:07:14.473 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.473 [ 00:07:14.473 { 00:07:14.473 "name": "Nvme0n1", 00:07:14.473 "aliases": [ 00:07:14.473 "92710835-ff99-417f-8255-d4e178ed25a3" 00:07:14.473 ], 00:07:14.473 "product_name": "NVMe disk", 00:07:14.473 "block_size": 4096, 00:07:14.473 "num_blocks": 38912, 00:07:14.473 "uuid": "92710835-ff99-417f-8255-d4e178ed25a3", 00:07:14.473 "numa_id": 0, 00:07:14.473 "assigned_rate_limits": { 00:07:14.473 "rw_ios_per_sec": 0, 00:07:14.473 "rw_mbytes_per_sec": 0, 00:07:14.473 "r_mbytes_per_sec": 0, 00:07:14.473 "w_mbytes_per_sec": 0 00:07:14.473 }, 00:07:14.473 "claimed": false, 00:07:14.473 "zoned": false, 00:07:14.473 "supported_io_types": { 00:07:14.473 "read": true, 00:07:14.473 "write": true, 00:07:14.473 "unmap": true, 00:07:14.473 "flush": true, 00:07:14.473 "reset": true, 00:07:14.473 "nvme_admin": true, 00:07:14.473 "nvme_io": true, 00:07:14.473 "nvme_io_md": false, 00:07:14.473 "write_zeroes": true, 00:07:14.473 "zcopy": false, 00:07:14.473 "get_zone_info": false, 00:07:14.473 "zone_management": false, 00:07:14.473 "zone_append": false, 00:07:14.473 "compare": true, 00:07:14.473 "compare_and_write": true, 00:07:14.473 "abort": true, 00:07:14.473 "seek_hole": false, 00:07:14.473 "seek_data": false, 00:07:14.473 "copy": true, 00:07:14.473 "nvme_iov_md": false 00:07:14.473 }, 00:07:14.473 "memory_domains": [ 00:07:14.473 { 00:07:14.473 "dma_device_id": "system", 00:07:14.473 "dma_device_type": 1 00:07:14.473 } 00:07:14.473 ], 00:07:14.473 "driver_specific": { 00:07:14.473 "nvme": [ 00:07:14.473 { 00:07:14.473 "trid": { 00:07:14.473 "trtype": "TCP", 00:07:14.473 "adrfam": "IPv4", 00:07:14.473 "traddr": "10.0.0.2", 00:07:14.473 "trsvcid": "4420", 00:07:14.473 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.473 }, 00:07:14.473 "ctrlr_data": { 00:07:14.473 "cntlid": 1, 00:07:14.473 "vendor_id": "0x8086", 00:07:14.473 "model_number": "SPDK bdev Controller", 00:07:14.473 "serial_number": "SPDK0", 00:07:14.473 "firmware_revision": "25.01", 00:07:14.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.473 "oacs": { 00:07:14.473 "security": 0, 00:07:14.473 "format": 0, 00:07:14.473 "firmware": 0, 00:07:14.473 "ns_manage": 0 00:07:14.473 }, 00:07:14.473 "multi_ctrlr": true, 00:07:14.473 
"ana_reporting": false 00:07:14.473 }, 00:07:14.473 "vs": { 00:07:14.473 "nvme_version": "1.3" 00:07:14.473 }, 00:07:14.473 "ns_data": { 00:07:14.473 "id": 1, 00:07:14.473 "can_share": true 00:07:14.473 } 00:07:14.473 } 00:07:14.473 ], 00:07:14.473 "mp_policy": "active_passive" 00:07:14.473 } 00:07:14.473 } 00:07:14.473 ] 00:07:14.731 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2834850 00:07:14.731 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.731 11:25:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.731 Running I/O for 10 seconds... 00:07:15.666 Latency(us) 00:07:15.666 [2024-11-15T10:25:56.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.666 Nvme0n1 : 1.00 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:07:15.666 [2024-11-15T10:25:56.093Z] =================================================================================================================== 00:07:15.666 [2024-11-15T10:25:56.093Z] Total : 15179.00 59.29 0.00 0.00 0.00 0.00 0.00 00:07:15.666 00:07:16.600 11:25:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:16.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.858 Nvme0n1 : 2.00 15336.50 59.91 0.00 0.00 0.00 0.00 0.00 00:07:16.858 [2024-11-15T10:25:57.285Z] =================================================================================================================== 00:07:16.858 [2024-11-15T10:25:57.285Z] Total : 15336.50 59.91 0.00 0.00 0.00 0.00 0.00 00:07:16.858 00:07:16.858 true 00:07:16.858 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:16.858 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:17.117 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:17.117 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:17.117 11:25:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2834850 00:07:17.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.683 Nvme0n1 : 3.00 15453.00 60.36 0.00 0.00 0.00 0.00 0.00 00:07:17.683 [2024-11-15T10:25:58.110Z] =================================================================================================================== 00:07:17.683 [2024-11-15T10:25:58.110Z] Total : 15453.00 60.36 0.00 0.00 0.00 0.00 0.00 00:07:17.683 00:07:18.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.617 Nvme0n1 : 4.00 15558.50 60.78 0.00 0.00 0.00 0.00 0.00 00:07:18.617 [2024-11-15T10:25:59.044Z] 
=================================================================================================================== 00:07:18.617 [2024-11-15T10:25:59.044Z] Total : 15558.50 60.78 0.00 0.00 0.00 0.00 0.00 00:07:18.617 00:07:19.989 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.989 Nvme0n1 : 5.00 15647.20 61.12 0.00 0.00 0.00 0.00 0.00 00:07:19.989 [2024-11-15T10:26:00.416Z] =================================================================================================================== 00:07:19.989 [2024-11-15T10:26:00.416Z] Total : 15647.20 61.12 0.00 0.00 0.00 0.00 0.00 00:07:19.989 00:07:20.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.921 Nvme0n1 : 6.00 15664.00 61.19 0.00 0.00 0.00 0.00 0.00 00:07:20.921 [2024-11-15T10:26:01.348Z] =================================================================================================================== 00:07:20.921 [2024-11-15T10:26:01.348Z] Total : 15664.00 61.19 0.00 0.00 0.00 0.00 0.00 00:07:20.921 00:07:21.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.852 Nvme0n1 : 7.00 15721.57 61.41 0.00 0.00 0.00 0.00 0.00 00:07:21.852 [2024-11-15T10:26:02.279Z] =================================================================================================================== 00:07:21.852 [2024-11-15T10:26:02.279Z] Total : 15721.57 61.41 0.00 0.00 0.00 0.00 0.00 00:07:21.852 00:07:22.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.784 Nvme0n1 : 8.00 15780.62 61.64 0.00 0.00 0.00 0.00 0.00 00:07:22.784 [2024-11-15T10:26:03.211Z] =================================================================================================================== 00:07:22.784 [2024-11-15T10:26:03.211Z] Total : 15780.62 61.64 0.00 0.00 0.00 0.00 0.00 00:07:22.784 00:07:23.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.718 Nvme0n1 : 9.00 15826.56 61.82 0.00 0.00 0.00 0.00 0.00 00:07:23.718 [2024-11-15T10:26:04.145Z] =================================================================================================================== 00:07:23.718 [2024-11-15T10:26:04.145Z] Total : 15826.56 61.82 0.00 0.00 0.00 0.00 0.00 00:07:23.718 00:07:24.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.651 Nvme0n1 : 10.00 15856.80 61.94 0.00 0.00 0.00 0.00 0.00 00:07:24.651 [2024-11-15T10:26:05.078Z] =================================================================================================================== 00:07:24.651 [2024-11-15T10:26:05.078Z] Total : 15856.80 61.94 0.00 0.00 0.00 0.00 0.00 00:07:24.651 00:07:24.651 00:07:24.651 Latency(us) 00:07:24.651 [2024-11-15T10:26:05.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.651 Nvme0n1 : 10.01 15859.52 61.95 0.00 0.00 8066.44 2706.39 15437.37 00:07:24.651 [2024-11-15T10:26:05.078Z] =================================================================================================================== 00:07:24.651 [2024-11-15T10:26:05.078Z] Total : 15859.52 61.95 0.00 0.00 8066.44 2706.39 15437.37 00:07:24.651 { 00:07:24.651 "results": [ 00:07:24.651 { 00:07:24.651 "job": "Nvme0n1", 00:07:24.651 "core_mask": "0x2", 00:07:24.651 "workload": "randwrite", 00:07:24.651 "status": "finished", 00:07:24.651 "queue_depth": 128, 00:07:24.651 "io_size": 4096, 00:07:24.651 
"runtime": 10.006355, 00:07:24.651 "iops": 15859.521274230226, 00:07:24.651 "mibps": 61.95125497746182, 00:07:24.651 "io_failed": 0, 00:07:24.651 "io_timeout": 0, 00:07:24.651 "avg_latency_us": 8066.443982457024, 00:07:24.651 "min_latency_us": 2706.394074074074, 00:07:24.651 "max_latency_us": 15437.368888888888 00:07:24.651 } 00:07:24.651 ], 00:07:24.651 "core_count": 1 00:07:24.651 } 00:07:24.651 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2834712 00:07:24.651 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2834712 ']' 00:07:24.651 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2834712 00:07:24.651 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:24.651 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.651 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834712 00:07:24.909 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.909 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.909 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834712' 00:07:24.909 killing process with pid 2834712 00:07:24.909 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2834712 00:07:24.909 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.909 00:07:24.909 Latency(us) 00:07:24.909 [2024-11-15T10:26:05.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.909 [2024-11-15T10:26:05.336Z] =================================================================================================================== 00:07:24.909 [2024-11-15T10:26:05.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.909 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2834712 00:07:24.909 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.475 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.475 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:25.475 11:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.733 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:25.733 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:25.733 11:26:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2832067 00:07:25.733 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2832067 00:07:25.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2832067 Killed "${NVMF_APP[@]}" "$@" 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2836185 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2836185 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2836185 ']' 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.991 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.991 [2024-11-15 11:26:06.245448] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:07:25.991 [2024-11-15 11:26:06.245541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.991 [2024-11-15 11:26:06.321747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.991 [2024-11-15 11:26:06.377600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.991 [2024-11-15 11:26:06.377667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.991 [2024-11-15 11:26:06.377681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.991 [2024-11-15 11:26:06.377691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:25.991 [2024-11-15 11:26:06.377700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.991 [2024-11-15 11:26:06.378246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.249 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.507 [2024-11-15 11:26:06.767364] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:26.507 [2024-11-15 11:26:06.767509] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:26.507 [2024-11-15 11:26:06.767557] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 92710835-ff99-417f-8255-d4e178ed25a3 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=92710835-ff99-417f-8255-d4e178ed25a3 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.507 11:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.765 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92710835-ff99-417f-8255-d4e178ed25a3 -t 2000 00:07:27.023 [ 00:07:27.023 { 00:07:27.023 "name": "92710835-ff99-417f-8255-d4e178ed25a3", 00:07:27.023 "aliases": [ 00:07:27.023 "lvs/lvol" 00:07:27.023 ], 00:07:27.023 "product_name": "Logical Volume", 00:07:27.023 "block_size": 4096, 00:07:27.023 "num_blocks": 38912, 00:07:27.023 "uuid": "92710835-ff99-417f-8255-d4e178ed25a3", 00:07:27.023 "assigned_rate_limits": { 00:07:27.023 "rw_ios_per_sec": 0, 00:07:27.023 "rw_mbytes_per_sec": 0, 
00:07:27.023 "r_mbytes_per_sec": 0, 00:07:27.023 "w_mbytes_per_sec": 0 00:07:27.023 }, 00:07:27.023 "claimed": false, 00:07:27.023 "zoned": false, 00:07:27.023 "supported_io_types": { 00:07:27.023 "read": true, 00:07:27.023 "write": true, 00:07:27.023 "unmap": true, 00:07:27.023 "flush": false, 00:07:27.023 "reset": true, 00:07:27.023 "nvme_admin": false, 00:07:27.023 "nvme_io": false, 00:07:27.023 "nvme_io_md": false, 00:07:27.023 "write_zeroes": true, 00:07:27.023 "zcopy": false, 00:07:27.023 "get_zone_info": false, 00:07:27.023 "zone_management": false, 00:07:27.023 "zone_append": false, 00:07:27.023 "compare": false, 00:07:27.023 "compare_and_write": false, 00:07:27.023 "abort": false, 00:07:27.023 "seek_hole": true, 00:07:27.023 "seek_data": true, 00:07:27.023 "copy": false, 00:07:27.023 "nvme_iov_md": false 00:07:27.023 }, 00:07:27.023 "driver_specific": { 00:07:27.023 "lvol": { 00:07:27.023 "lvol_store_uuid": "afb6505e-0dde-45a1-84b2-b9e52ef8478c", 00:07:27.023 "base_bdev": "aio_bdev", 00:07:27.023 "thin_provision": false, 00:07:27.023 "num_allocated_clusters": 38, 00:07:27.023 "snapshot": false, 00:07:27.023 "clone": false, 00:07:27.024 "esnap_clone": false 00:07:27.024 } 00:07:27.024 } 00:07:27.024 } 00:07:27.024 ] 00:07:27.024 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:27.024 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:27.024 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:27.281 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:27.281 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:27.281 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:27.539 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:27.539 11:26:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.798 [2024-11-15 11:26:08.120919] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:27.798 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:28.057 request: 00:07:28.057 { 00:07:28.057 "uuid": "afb6505e-0dde-45a1-84b2-b9e52ef8478c", 00:07:28.057 "method": "bdev_lvol_get_lvstores", 00:07:28.057 "req_id": 1 00:07:28.057 } 00:07:28.057 Got JSON-RPC error response 00:07:28.057 response: 00:07:28.057 { 00:07:28.057 "code": -19, 00:07:28.057 "message": "No such device" 00:07:28.057 } 00:07:28.057 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:28.057 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.057 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.057 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.057 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.316 aio_bdev 00:07:28.316 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 92710835-ff99-417f-8255-d4e178ed25a3 00:07:28.316 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=92710835-ff99-417f-8255-d4e178ed25a3 00:07:28.316 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.316 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:28.316 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.316 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.316 11:26:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:28.575 11:26:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 92710835-ff99-417f-8255-d4e178ed25a3 -t 2000 00:07:28.833 [ 00:07:28.833 { 00:07:28.833 "name": "92710835-ff99-417f-8255-d4e178ed25a3", 00:07:28.833 "aliases": [ 00:07:28.833 "lvs/lvol" 00:07:28.833 ], 00:07:28.833 "product_name": "Logical Volume", 00:07:28.833 "block_size": 4096, 00:07:28.833 "num_blocks": 38912, 00:07:28.833 "uuid": "92710835-ff99-417f-8255-d4e178ed25a3", 00:07:28.833 "assigned_rate_limits": { 00:07:28.833 "rw_ios_per_sec": 0, 00:07:28.833 "rw_mbytes_per_sec": 0, 00:07:28.833 "r_mbytes_per_sec": 0, 00:07:28.833 "w_mbytes_per_sec": 0 00:07:28.833 }, 00:07:28.833 "claimed": false, 00:07:28.833 "zoned": false, 00:07:28.833 "supported_io_types": { 00:07:28.833 "read": true, 00:07:28.833 "write": true, 00:07:28.833 "unmap": true, 00:07:28.833 "flush": false, 00:07:28.833 "reset": true, 00:07:28.833 "nvme_admin": false, 00:07:28.833 "nvme_io": false, 00:07:28.833 "nvme_io_md": false, 00:07:28.833 "write_zeroes": true, 00:07:28.833 "zcopy": false, 00:07:28.833 "get_zone_info": false, 00:07:28.833 "zone_management": false, 00:07:28.833 "zone_append": false, 00:07:28.833 "compare": false, 00:07:28.833 "compare_and_write": false, 00:07:28.833 "abort": false, 00:07:28.833 "seek_hole": true, 00:07:28.833 "seek_data": true, 00:07:28.833 "copy": false, 00:07:28.833 "nvme_iov_md": false 00:07:28.833 }, 00:07:28.833 "driver_specific": { 00:07:28.833 "lvol": { 00:07:28.833 "lvol_store_uuid": "afb6505e-0dde-45a1-84b2-b9e52ef8478c", 00:07:28.833 "base_bdev": "aio_bdev", 00:07:28.833 "thin_provision": false, 00:07:28.833 "num_allocated_clusters": 38, 00:07:28.833 "snapshot": false, 00:07:28.833 "clone": false, 00:07:28.833 "esnap_clone": false 00:07:28.833 } 00:07:28.833 } 00:07:28.833 } 00:07:28.833 ] 00:07:28.834 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:28.834 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:28.834 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:29.091 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:29.091 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:29.091 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:29.658 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:29.658 11:26:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 92710835-ff99-417f-8255-d4e178ed25a3 00:07:29.658 11:26:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u afb6505e-0dde-45a1-84b2-b9e52ef8478c 00:07:30.224 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:30.224 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.224 00:07:30.224 real 0m19.356s 00:07:30.224 user 0m49.266s 00:07:30.224 sys 0m4.466s 00:07:30.224 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.224 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.224 ************************************ 00:07:30.224 END TEST lvs_grow_dirty 00:07:30.224 ************************************ 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:30.482 nvmf_trace.0 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.482 rmmod nvme_tcp 00:07:30.482 rmmod nvme_fabrics 00:07:30.482 rmmod nvme_keyring 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:30.482 
11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2836185 ']' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2836185 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2836185 ']' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2836185 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2836185 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2836185' 00:07:30.482 killing process with pid 2836185 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2836185 00:07:30.482 11:26:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2836185 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.741 11:26:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.646 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:32.646 00:07:32.646 real 0m42.751s 00:07:32.646 user 1m12.678s 00:07:32.646 sys 0m8.368s 00:07:32.646 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.646 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.646 ************************************ 00:07:32.646 END TEST nvmf_lvs_grow 00:07:32.646 ************************************ 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:32.905 ************************************ 00:07:32.905 START TEST nvmf_bdev_io_wait 00:07:32.905 ************************************ 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:32.905 * Looking for test storage... 00:07:32.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.905 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.906 --rc genhtml_branch_coverage=1 00:07:32.906 --rc genhtml_function_coverage=1 00:07:32.906 --rc genhtml_legend=1 00:07:32.906 --rc geninfo_all_blocks=1 00:07:32.906 --rc geninfo_unexecuted_blocks=1 00:07:32.906 00:07:32.906 ' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.906 --rc genhtml_branch_coverage=1 00:07:32.906 --rc genhtml_function_coverage=1 00:07:32.906 --rc genhtml_legend=1 00:07:32.906 --rc geninfo_all_blocks=1 00:07:32.906 --rc geninfo_unexecuted_blocks=1 00:07:32.906 00:07:32.906 ' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.906 --rc genhtml_branch_coverage=1 00:07:32.906 --rc genhtml_function_coverage=1 00:07:32.906 --rc genhtml_legend=1 00:07:32.906 --rc geninfo_all_blocks=1 00:07:32.906 --rc geninfo_unexecuted_blocks=1 00:07:32.906 00:07:32.906 ' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.906 --rc genhtml_branch_coverage=1 00:07:32.906 --rc genhtml_function_coverage=1 00:07:32.906 --rc genhtml_legend=1 00:07:32.906 --rc geninfo_all_blocks=1 00:07:32.906 --rc geninfo_unexecuted_blocks=1 00:07:32.906 00:07:32.906 ' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.906 11:26:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:32.906 11:26:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.441 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:35.442 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:35.442 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.442 11:26:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:35.442 Found net devices under 0000:09:00.0: cvl_0_0 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:35.442 Found net devices under 0000:09:00.1: cvl_0_1 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:07:35.442 00:07:35.442 --- 10.0.0.2 ping statistics --- 00:07:35.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.442 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:07:35.442 00:07:35.442 --- 10.0.0.1 ping statistics --- 00:07:35.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.442 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2838724 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2838724 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2838724 ']' 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.442 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.443 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.443 [2024-11-15 11:26:15.709964] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:07:35.443 [2024-11-15 11:26:15.710055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.443 [2024-11-15 11:26:15.782113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.443 [2024-11-15 11:26:15.844342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.443 [2024-11-15 11:26:15.844394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.443 [2024-11-15 11:26:15.844422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.443 [2024-11-15 11:26:15.844434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.443 [2024-11-15 11:26:15.844443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.443 [2024-11-15 11:26:15.845973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.443 [2024-11-15 11:26:15.846032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.443 [2024-11-15 11:26:15.846098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.443 [2024-11-15 11:26:15.846101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:35.702 [2024-11-15 11:26:16.041462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 Malloc0 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:35.702 [2024-11-15 11:26:16.095142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2838752 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2838754 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.702 { 00:07:35.702 "params": { 00:07:35.702 "name": "Nvme$subsystem", 00:07:35.702 "trtype": "$TEST_TRANSPORT", 00:07:35.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.702 "adrfam": "ipv4", 00:07:35.702 "trsvcid": "$NVMF_PORT", 00:07:35.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.702 "hdgst": ${hdgst:-false}, 00:07:35.702 "ddgst": ${ddgst:-false} 00:07:35.702 }, 00:07:35.702 "method": "bdev_nvme_attach_controller" 00:07:35.702 } 00:07:35.702 EOF 00:07:35.702 )") 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2838756 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.702 { 00:07:35.702 "params": { 00:07:35.702 "name": "Nvme$subsystem", 00:07:35.702 "trtype": "$TEST_TRANSPORT", 00:07:35.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.702 "adrfam": "ipv4", 00:07:35.702 "trsvcid": "$NVMF_PORT", 00:07:35.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.702 "hdgst": ${hdgst:-false}, 00:07:35.702 "ddgst": ${ddgst:-false} 00:07:35.702 }, 00:07:35.702 "method": "bdev_nvme_attach_controller" 00:07:35.702 } 00:07:35.702 EOF 00:07:35.702 )") 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2838758 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.702 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:07:35.702 { 00:07:35.702 "params": { 00:07:35.702 "name": "Nvme$subsystem", 00:07:35.703 "trtype": "$TEST_TRANSPORT", 00:07:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.703 "adrfam": "ipv4", 00:07:35.703 "trsvcid": "$NVMF_PORT", 00:07:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.703 "hdgst": ${hdgst:-false}, 00:07:35.703 "ddgst": ${ddgst:-false} 00:07:35.703 }, 00:07:35.703 "method": "bdev_nvme_attach_controller" 00:07:35.703 } 00:07:35.703 EOF 00:07:35.703 )") 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.703 { 00:07:35.703 "params": { 00:07:35.703 "name": "Nvme$subsystem", 00:07:35.703 "trtype": "$TEST_TRANSPORT", 00:07:35.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.703 "adrfam": "ipv4", 00:07:35.703 "trsvcid": "$NVMF_PORT", 00:07:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.703 "hdgst": ${hdgst:-false}, 00:07:35.703 "ddgst": ${ddgst:-false} 00:07:35.703 }, 00:07:35.703 "method": "bdev_nvme_attach_controller" 00:07:35.703 } 00:07:35.703 EOF 00:07:35.703 )") 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2838752 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.703 "params": { 00:07:35.703 "name": "Nvme1", 00:07:35.703 "trtype": "tcp", 00:07:35.703 "traddr": "10.0.0.2", 00:07:35.703 "adrfam": "ipv4", 00:07:35.703 "trsvcid": "4420", 00:07:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.703 "hdgst": false, 00:07:35.703 "ddgst": false 00:07:35.703 }, 00:07:35.703 "method": "bdev_nvme_attach_controller" 00:07:35.703 }' 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.703 "params": { 00:07:35.703 "name": "Nvme1", 00:07:35.703 "trtype": "tcp", 00:07:35.703 "traddr": "10.0.0.2", 00:07:35.703 "adrfam": "ipv4", 00:07:35.703 "trsvcid": "4420", 00:07:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.703 "hdgst": false, 00:07:35.703 "ddgst": false 00:07:35.703 }, 00:07:35.703 "method": "bdev_nvme_attach_controller" 00:07:35.703 }' 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.703 "params": { 00:07:35.703 "name": "Nvme1", 00:07:35.703 "trtype": "tcp", 00:07:35.703 "traddr": "10.0.0.2", 00:07:35.703 "adrfam": "ipv4", 00:07:35.703 "trsvcid": "4420", 00:07:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.703 "hdgst": false, 00:07:35.703 "ddgst": false 00:07:35.703 }, 00:07:35.703 "method": "bdev_nvme_attach_controller" 00:07:35.703 }' 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:35.703 11:26:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.703 "params": { 00:07:35.703 "name": "Nvme1", 00:07:35.703 "trtype": "tcp", 00:07:35.703 "traddr": "10.0.0.2", 00:07:35.703 "adrfam": "ipv4", 00:07:35.703 "trsvcid": "4420", 00:07:35.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:35.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:35.703 "hdgst": false, 00:07:35.703 "ddgst": false 00:07:35.703 }, 00:07:35.703 "method": "bdev_nvme_attach_controller" 00:07:35.703 }' 00:07:35.961 [2024-11-15 11:26:16.146712] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:07:35.961 [2024-11-15 11:26:16.146716] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:07:35.961 [2024-11-15 11:26:16.146712] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
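The fragments printed above are what gen_nvmf_target_json hands to each bdevperf over /dev/fd/63. Written out as a standalone file, the full document would look roughly like the sketch below; the outer subsystems/bdev wrapper is an assumption about the helper's output, while the params match this run exactly:

    cat > /tmp/nvme_initiator.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # One of the four initiators traced above (the write workload, core mask 0x10, shm id 1):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json /tmp/nvme_initiator.json -q 128 -o 4096 -w write -t 1 -s 256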
00:07:35.961 [2024-11-15 11:26:16.146799] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:35.961 [2024-11-15 11:26:16.146800] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:35.961 [2024-11-15 11:26:16.146799] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:35.961 [2024-11-15 11:26:16.146838] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:07:35.961 [2024-11-15 11:26:16.146903] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:35.961 [2024-11-15 11:26:16.344919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.219 [2024-11-15 11:26:16.400751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:36.219 [2024-11-15 11:26:16.449640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.219 [2024-11-15 11:26:16.505544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.219 [2024-11-15 11:26:16.552420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.219 [2024-11-15 11:26:16.610117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:36.219 [2024-11-15 11:26:16.629593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.476 [2024-11-15 11:26:16.683068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:36.476 Running I/O for 1 seconds... 00:07:36.476 Running I/O for 1 seconds... 00:07:36.476 Running I/O for 1 seconds... 00:07:36.734 Running I/O for 1 seconds...
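Four bdevperf processes are now running concurrently, one per workload, which is why their startup banners interleave above. A condensed sketch of how the script drives them, with values taken from this run; the real script passes the JSON over a /dev/fd/63 process substitution rather than a file:

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    JSON=/tmp/nvme_initiator.json   # generated as in the previous sketch

    # Each instance gets its own core mask (-m) and shared-memory id (-i) so the four
    # DPDK processes do not collide; all run for one second against the same subsystem.
    $BDEVPERF -m 0x10 -i 1 --json "$JSON" -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json "$JSON" -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json "$JSON" -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json "$JSON" -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # each prints its own result table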
00:07:37.668 190368.00 IOPS, 743.62 MiB/s 00:07:37.668 Latency(us) 00:07:37.668 [2024-11-15T10:26:18.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.668 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:37.668 Nvme1n1 : 1.00 190003.02 742.20 0.00 0.00 670.05 289.75 1893.26 00:07:37.668 [2024-11-15T10:26:18.095Z] =================================================================================================================== 00:07:37.668 [2024-11-15T10:26:18.095Z] Total : 190003.02 742.20 0.00 0.00 670.05 289.75 1893.26 00:07:37.668 6666.00 IOPS, 26.04 MiB/s 00:07:37.668 Latency(us) 00:07:37.668 [2024-11-15T10:26:18.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.668 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:37.668 Nvme1n1 : 1.02 6668.72 26.05 0.00 0.00 18992.19 9175.04 29515.47 00:07:37.668 [2024-11-15T10:26:18.095Z] =================================================================================================================== 00:07:37.668 [2024-11-15T10:26:18.095Z] Total : 6668.72 26.05 0.00 0.00 18992.19 9175.04 29515.47 00:07:37.668 9252.00 IOPS, 36.14 MiB/s 00:07:37.668 Latency(us) 00:07:37.668 [2024-11-15T10:26:18.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.668 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:37.668 Nvme1n1 : 1.01 9305.26 36.35 0.00 0.00 13690.36 6796.33 24369.68 00:07:37.668 [2024-11-15T10:26:18.095Z] =================================================================================================================== 00:07:37.668 [2024-11-15T10:26:18.095Z] Total : 9305.26 36.35 0.00 0.00 13690.36 6796.33 24369.68 00:07:37.668 6846.00 IOPS, 26.74 MiB/s 00:07:37.668 Latency(us) 00:07:37.668 [2024-11-15T10:26:18.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.668 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:37.668 Nvme1n1 : 1.01 6951.81 27.16 0.00 0.00 18363.73 3592.34 46215.02 00:07:37.668 [2024-11-15T10:26:18.095Z] =================================================================================================================== 00:07:37.668 [2024-11-15T10:26:18.095Z] Total : 6951.81 27.16 0.00 0.00 18363.73 3592.34 46215.02 00:07:37.668 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2838754 00:07:37.668 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2838756 00:07:37.668 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2838758 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:37.926 rmmod nvme_tcp 00:07:37.926 rmmod nvme_fabrics 00:07:37.926 rmmod nvme_keyring 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2838724 ']' 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2838724 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2838724 ']' 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2838724 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838724 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838724' 00:07:37.926 killing process with pid 2838724 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2838724 00:07:37.926 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2838724 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:38.185 11:26:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.185 11:26:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.087 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:40.347 00:07:40.347 real 0m7.404s 00:07:40.347 user 0m16.357s 00:07:40.347 sys 0m3.654s 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:40.347 ************************************ 00:07:40.347 END TEST nvmf_bdev_io_wait 00:07:40.347 ************************************ 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.347 ************************************ 00:07:40.347 START TEST nvmf_queue_depth 00:07:40.347 ************************************ 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:40.347 * Looking for test storage... 
00:07:40.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:40.347 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.348 --rc genhtml_branch_coverage=1 00:07:40.348 --rc genhtml_function_coverage=1 00:07:40.348 --rc genhtml_legend=1 00:07:40.348 --rc geninfo_all_blocks=1 00:07:40.348 --rc geninfo_unexecuted_blocks=1 00:07:40.348 00:07:40.348 ' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.348 --rc genhtml_branch_coverage=1 00:07:40.348 --rc genhtml_function_coverage=1 00:07:40.348 --rc genhtml_legend=1 00:07:40.348 --rc geninfo_all_blocks=1 00:07:40.348 --rc geninfo_unexecuted_blocks=1 00:07:40.348 00:07:40.348 ' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.348 --rc genhtml_branch_coverage=1 00:07:40.348 --rc genhtml_function_coverage=1 00:07:40.348 --rc genhtml_legend=1 00:07:40.348 --rc geninfo_all_blocks=1 00:07:40.348 --rc geninfo_unexecuted_blocks=1 00:07:40.348 00:07:40.348 ' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.348 --rc genhtml_branch_coverage=1 00:07:40.348 --rc genhtml_function_coverage=1 00:07:40.348 --rc genhtml_legend=1 00:07:40.348 --rc geninfo_all_blocks=1 00:07:40.348 --rc geninfo_unexecuted_blocks=1 00:07:40.348 00:07:40.348 ' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:40.348 11:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:42.886 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:42.886 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:42.886 Found net devices under 0000:09:00.0: cvl_0_0 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:42.886 Found net devices under 0000:09:00.1: cvl_0_1 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.886 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:42.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:07:42.887 00:07:42.887 --- 10.0.0.2 ping statistics --- 00:07:42.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.887 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:42.887 00:07:42.887 --- 10.0.0.1 ping statistics --- 00:07:42.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.887 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2841035 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2841035 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2841035 ']' 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.887 11:26:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:42.887 [2024-11-15 11:26:23.045972] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
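The queue_depth test now starts its own single-core target inside the same namespace and waits for the RPC socket, as the trace shows (nvmfpid=2841035 in this run). A simplified stand-in for the nvmfappstart/waitforlisten pair; the readiness check below is an assumption kept deliberately crude, and the real waitforlisten helper is more thorough:

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    for _ in $(seq 1 100); do                      # max_retries=100, as in the trace
        [[ -S "$RPC_SOCK" ]] && break              # assumed check: RPC socket has appeared
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done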
00:07:42.887 [2024-11-15 11:26:23.046052] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.887 [2024-11-15 11:26:23.122236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.887 [2024-11-15 11:26:23.179540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.887 [2024-11-15 11:26:23.179599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.887 [2024-11-15 11:26:23.179612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.887 [2024-11-15 11:26:23.179623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.887 [2024-11-15 11:26:23.179632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.887 [2024-11-15 11:26:23.180197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.887 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.887 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:42.887 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:42.887 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:42.887 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 [2024-11-15 11:26:23.320778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 Malloc0 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.145 11:26:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 [2024-11-15 11:26:23.369747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2841136 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2841136 /var/tmp/bdevperf.sock 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2841136 ']' 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:43.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.145 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 [2024-11-15 11:26:23.417347] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
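Steps @23 through @27 of queue_depth.sh above configure the target entirely over JSON-RPC. Outside the harness, the same sequence corresponds roughly to these scripts/rpc.py invocations (the default /var/tmp/spdk.sock control socket is assumed; the harness's rpc_cmd helper resolves to the same script):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                       # TCP transport with the options the harness uses
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                          # 64 MiB RAM bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf instance started right after ("-q 1024 -o 4096 -w verify -t 10") attaches to that subsystem at 10.0.0.2:4420 and drives it at a queue depth of 1024 with 4 KiB I/O for 10 seconds.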
00:07:43.145 [2024-11-15 11:26:23.417423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841136 ] 00:07:43.145 [2024-11-15 11:26:23.481679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.145 [2024-11-15 11:26:23.539387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.404 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.404 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:43.404 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:43.404 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.404 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.662 NVMe0n1 00:07:43.662 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.662 11:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:43.662 Running I/O for 10 seconds... 00:07:45.969 8192.00 IOPS, 32.00 MiB/s [2024-11-15T10:26:27.331Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-15T10:26:28.265Z] 8513.67 IOPS, 33.26 MiB/s [2024-11-15T10:26:29.198Z] 8449.25 IOPS, 33.00 MiB/s [2024-11-15T10:26:30.132Z] 8565.80 IOPS, 33.46 MiB/s [2024-11-15T10:26:31.066Z] 8531.83 IOPS, 33.33 MiB/s [2024-11-15T10:26:32.438Z] 8582.14 IOPS, 33.52 MiB/s [2024-11-15T10:26:33.371Z] 8569.75 IOPS, 33.48 MiB/s [2024-11-15T10:26:34.302Z] 8570.11 IOPS, 33.48 MiB/s [2024-11-15T10:26:34.303Z] 8596.60 IOPS, 33.58 MiB/s 00:07:53.876 Latency(us) 00:07:53.876 [2024-11-15T10:26:34.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.876 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:53.876 Verification LBA range: start 0x0 length 0x4000 00:07:53.876 NVMe0n1 : 10.08 8626.08 33.70 0.00 0.00 118259.18 21068.61 70681.79 00:07:53.876 [2024-11-15T10:26:34.303Z] =================================================================================================================== 00:07:53.876 [2024-11-15T10:26:34.303Z] Total : 8626.08 33.70 0.00 0.00 118259.18 21068.61 70681.79 00:07:53.876 { 00:07:53.876 "results": [ 00:07:53.876 { 00:07:53.876 "job": "NVMe0n1", 00:07:53.876 "core_mask": "0x1", 00:07:53.876 "workload": "verify", 00:07:53.876 "status": "finished", 00:07:53.876 "verify_range": { 00:07:53.876 "start": 0, 00:07:53.876 "length": 16384 00:07:53.876 }, 00:07:53.876 "queue_depth": 1024, 00:07:53.876 "io_size": 4096, 00:07:53.876 "runtime": 10.084537, 00:07:53.876 "iops": 8626.077726721613, 00:07:53.876 "mibps": 33.6956161200063, 00:07:53.876 "io_failed": 0, 00:07:53.876 "io_timeout": 0, 00:07:53.876 "avg_latency_us": 118259.17724400848, 00:07:53.876 "min_latency_us": 21068.61037037037, 00:07:53.876 "max_latency_us": 70681.78962962962 00:07:53.876 } 00:07:53.876 ], 00:07:53.876 "core_count": 1 00:07:53.876 } 00:07:53.876 11:26:34 
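Two quick sanity checks on the summary above, using only numbers from the result block:

    throughput  : 8626.08 IOPS x 4096 B  ~= 35.3 MB/s ~= 33.70 MiB/s        (matches the MiB/s column)
    avg latency : 1024 outstanding / 8626.08 IOPS ~= 0.1187 s ~= 118,700 us (Little's law; reported 118,259 us)

In other words, the roughly 118 ms average latency is what a queue depth of 1024 implies at this throughput, not a sign of a slow path; completions arrive about every 116 us (1 s / 8626 IOPS).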
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2841136 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2841136 ']' 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2841136 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841136 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841136' 00:07:53.876 killing process with pid 2841136 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2841136 00:07:53.876 Received shutdown signal, test time was about 10.000000 seconds 00:07:53.876 00:07:53.876 Latency(us) 00:07:53.876 [2024-11-15T10:26:34.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.876 [2024-11-15T10:26:34.303Z] =================================================================================================================== 00:07:53.876 [2024-11-15T10:26:34.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:53.876 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2841136 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.133 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.134 rmmod nvme_tcp 00:07:54.134 rmmod nvme_fabrics 00:07:54.134 rmmod nvme_keyring 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2841035 ']' 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2841035 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2841035 ']' 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2841035 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841035 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841035' 00:07:54.134 killing process with pid 2841035 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2841035 00:07:54.134 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2841035 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.393 11:26:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:56.929 00:07:56.929 real 0m16.211s 00:07:56.929 user 0m22.860s 00:07:56.929 sys 0m3.024s 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.929 ************************************ 00:07:56.929 END TEST nvmf_queue_depth 00:07:56.929 ************************************ 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:56.929 ************************************ 00:07:56.929 START TEST nvmf_target_multipath 00:07:56.929 ************************************ 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:56.929 * Looking for test storage... 00:07:56.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:56.929 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.930 --rc genhtml_branch_coverage=1 00:07:56.930 --rc genhtml_function_coverage=1 00:07:56.930 --rc genhtml_legend=1 00:07:56.930 --rc geninfo_all_blocks=1 00:07:56.930 --rc geninfo_unexecuted_blocks=1 00:07:56.930 00:07:56.930 ' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.930 --rc genhtml_branch_coverage=1 00:07:56.930 --rc genhtml_function_coverage=1 00:07:56.930 --rc genhtml_legend=1 00:07:56.930 --rc geninfo_all_blocks=1 00:07:56.930 --rc geninfo_unexecuted_blocks=1 00:07:56.930 00:07:56.930 ' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.930 --rc genhtml_branch_coverage=1 00:07:56.930 --rc genhtml_function_coverage=1 00:07:56.930 --rc genhtml_legend=1 00:07:56.930 --rc geninfo_all_blocks=1 00:07:56.930 --rc geninfo_unexecuted_blocks=1 00:07:56.930 00:07:56.930 ' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.930 --rc genhtml_branch_coverage=1 00:07:56.930 --rc genhtml_function_coverage=1 00:07:56.930 --rc genhtml_legend=1 00:07:56.930 --rc geninfo_all_blocks=1 00:07:56.930 --rc geninfo_unexecuted_blocks=1 00:07:56.930 00:07:56.930 ' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.930 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.931 11:26:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.964 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:58.965 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:58.965 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:58.965 Found net devices under 0000:09:00.0: cvl_0_0 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.965 11:26:39 
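Device discovery here (gather_supported_nvmf_pci_devs) matches PCI functions against a table of supported vendor/device IDs and then maps each hit to its kernel netdev through sysfs, which is the "/sys/bus/pci/devices/$pci/net/" glob visible in the trace. With the IDs printed in this run, the lookup can be reproduced by hand (assuming lspci is installed; bus addresses are the ones reported above):

    lspci -d 8086:159b                            # lists the two E810 ports, 0000:09:00.0 and 0000:09:00.1 here
    ls /sys/bus/pci/devices/0000:09:00.0/net/     # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:09:00.1/net/     # -> cvl_0_1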
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:58.965 Found net devices under 0000:09:00.1: cvl_0_1 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:58.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:07:58.965 00:07:58.965 --- 10.0.0.2 ping statistics --- 00:07:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.965 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:07:58.965 00:07:58.965 --- 10.0.0.1 ping statistics --- 00:07:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.965 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:58.965 only one NIC for nvmf test 00:07:58.965 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
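The check at multipath.sh@45 just above tested an empty string, which is why "only one NIC for nvmf test" was printed and nvmftestfini invoked; multipath.sh@48 below then exits 0. The variable under test is not echoed in the trace, but since nvmf/common.sh@262 set NVMF_SECOND_TARGET_IP to an empty string earlier in this run, the guard presumably amounts to:

    # reconstructed from the xtrace; the tested variable name is an assumption
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
            echo 'only one NIC for nvmf test'
            nvmftestfini
            exit 0
    fi

so on this topology the multipath test is skipped cleanly (exit status 0) rather than failed.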
00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.966 rmmod nvme_tcp 00:07:58.966 rmmod nvme_fabrics 00:07:58.966 rmmod nvme_keyring 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.966 11:26:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.507 00:08:01.507 real 0m4.564s 00:08:01.507 user 0m0.922s 00:08:01.507 sys 0m1.656s 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 ************************************ 00:08:01.507 END TEST nvmf_target_multipath 00:08:01.507 ************************************ 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.507 ************************************ 00:08:01.507 START TEST nvmf_zcopy 00:08:01.507 ************************************ 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:01.507 * Looking for test storage... 
00:08:01.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.507 --rc genhtml_branch_coverage=1 00:08:01.507 --rc genhtml_function_coverage=1 00:08:01.507 --rc genhtml_legend=1 00:08:01.507 --rc geninfo_all_blocks=1 00:08:01.507 --rc geninfo_unexecuted_blocks=1 00:08:01.507 00:08:01.507 ' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.507 --rc genhtml_branch_coverage=1 00:08:01.507 --rc genhtml_function_coverage=1 00:08:01.507 --rc genhtml_legend=1 00:08:01.507 --rc geninfo_all_blocks=1 00:08:01.507 --rc geninfo_unexecuted_blocks=1 00:08:01.507 00:08:01.507 ' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.507 --rc genhtml_branch_coverage=1 00:08:01.507 --rc genhtml_function_coverage=1 00:08:01.507 --rc genhtml_legend=1 00:08:01.507 --rc geninfo_all_blocks=1 00:08:01.507 --rc geninfo_unexecuted_blocks=1 00:08:01.507 00:08:01.507 ' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.507 --rc genhtml_branch_coverage=1 00:08:01.507 --rc genhtml_function_coverage=1 00:08:01.507 --rc genhtml_legend=1 00:08:01.507 --rc geninfo_all_blocks=1 00:08:01.507 --rc geninfo_unexecuted_blocks=1 00:08:01.507 00:08:01.507 ' 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.507 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.508 11:26:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.412 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:03.413 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:03.413 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:03.413 Found net devices under 0000:09:00.0: cvl_0_0 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:03.413 Found net devices under 0000:09:00.1: cvl_0_1 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.413 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:08:03.671 00:08:03.671 --- 10.0.0.2 ping statistics --- 00:08:03.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.671 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:08:03.671 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:03.671 00:08:03.671 --- 10.0.0.1 ping statistics --- 00:08:03.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.672 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2846348 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2846348 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2846348 ']' 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.672 11:26:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.672 [2024-11-15 11:26:44.016636] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:08:03.672 [2024-11-15 11:26:44.016737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.672 [2024-11-15 11:26:44.095763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.930 [2024-11-15 11:26:44.152379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.930 [2024-11-15 11:26:44.152433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.930 [2024-11-15 11:26:44.152462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.930 [2024-11-15 11:26:44.152473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.930 [2024-11-15 11:26:44.152484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.930 [2024-11-15 11:26:44.153057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 [2024-11-15 11:26:44.297843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 [2024-11-15 11:26:44.314045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 malloc0 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:03.930 { 00:08:03.930 "params": { 00:08:03.930 "name": "Nvme$subsystem", 00:08:03.930 "trtype": "$TEST_TRANSPORT", 00:08:03.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:03.930 "adrfam": "ipv4", 00:08:03.930 "trsvcid": "$NVMF_PORT", 00:08:03.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:03.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:03.930 "hdgst": ${hdgst:-false}, 00:08:03.930 "ddgst": ${ddgst:-false} 00:08:03.930 }, 00:08:03.930 "method": "bdev_nvme_attach_controller" 00:08:03.930 } 00:08:03.930 EOF 00:08:03.930 )") 00:08:03.930 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:04.188 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
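At this point zcopy.sh has provisioned the target over RPC: a TCP transport with zero-copy enabled (-c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, a 32 MiB malloc bdev with 4 KiB blocks, and namespace 1. gen_nvmf_target_json then assembles the bdev_nvme_attach_controller configuration that the first bdevperf run (-t 10 -q 128 -w verify -o 8192) reads from /dev/fd/62. Since rpc_cmd is the harness wrapper around scripts/rpc.py and the target's RPC socket is the /var/tmp/spdk.sock mentioned by waitforlisten, the same provisioning could be reproduced by hand roughly as follows, with the flags copied from the trace:

    rpc=./scripts/rpc.py    # run from the SPDK tree; talks to /var/tmp/spdk.sock

    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                       # zero-copy TCP transport
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery listener
    $rpc bdev_malloc_create 32 4096 -b malloc0                              # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
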
00:08:04.188 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:04.188 11:26:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:04.188 "params": { 00:08:04.188 "name": "Nvme1", 00:08:04.188 "trtype": "tcp", 00:08:04.188 "traddr": "10.0.0.2", 00:08:04.189 "adrfam": "ipv4", 00:08:04.189 "trsvcid": "4420", 00:08:04.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:04.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:04.189 "hdgst": false, 00:08:04.189 "ddgst": false 00:08:04.189 }, 00:08:04.189 "method": "bdev_nvme_attach_controller" 00:08:04.189 }' 00:08:04.189 [2024-11-15 11:26:44.401606] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:08:04.189 [2024-11-15 11:26:44.401689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846374 ] 00:08:04.189 [2024-11-15 11:26:44.473829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.189 [2024-11-15 11:26:44.533678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.448 Running I/O for 10 seconds... 00:08:06.752 5807.00 IOPS, 45.37 MiB/s [2024-11-15T10:26:48.113Z] 5835.00 IOPS, 45.59 MiB/s [2024-11-15T10:26:49.046Z] 5836.00 IOPS, 45.59 MiB/s [2024-11-15T10:26:49.979Z] 5849.50 IOPS, 45.70 MiB/s [2024-11-15T10:26:50.912Z] 5858.20 IOPS, 45.77 MiB/s [2024-11-15T10:26:51.845Z] 5868.83 IOPS, 45.85 MiB/s [2024-11-15T10:26:53.217Z] 5874.29 IOPS, 45.89 MiB/s [2024-11-15T10:26:54.151Z] 5881.25 IOPS, 45.95 MiB/s [2024-11-15T10:26:55.084Z] 5881.89 IOPS, 45.95 MiB/s [2024-11-15T10:26:55.084Z] 5884.30 IOPS, 45.97 MiB/s 00:08:14.657 Latency(us) 00:08:14.657 [2024-11-15T10:26:55.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.657 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:14.657 Verification LBA range: start 0x0 length 0x1000 00:08:14.657 Nvme1n1 : 10.01 5884.56 45.97 0.00 0.00 21691.85 1001.24 30292.20 00:08:14.657 [2024-11-15T10:26:55.084Z] =================================================================================================================== 00:08:14.657 [2024-11-15T10:26:55.084Z] Total : 5884.56 45.97 0.00 0.00 21691.85 1001.24 30292.20 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2847690 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.657 { 00:08:14.657 "params": { 00:08:14.657 "name": 
"Nvme$subsystem", 00:08:14.657 "trtype": "$TEST_TRANSPORT", 00:08:14.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.657 "adrfam": "ipv4", 00:08:14.657 "trsvcid": "$NVMF_PORT", 00:08:14.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.657 "hdgst": ${hdgst:-false}, 00:08:14.657 "ddgst": ${ddgst:-false} 00:08:14.657 }, 00:08:14.657 "method": "bdev_nvme_attach_controller" 00:08:14.657 } 00:08:14.657 EOF 00:08:14.657 )") 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:14.657 [2024-11-15 11:26:55.030329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.657 [2024-11-15 11:26:55.030401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:14.657 11:26:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.657 "params": { 00:08:14.657 "name": "Nvme1", 00:08:14.657 "trtype": "tcp", 00:08:14.657 "traddr": "10.0.0.2", 00:08:14.657 "adrfam": "ipv4", 00:08:14.657 "trsvcid": "4420", 00:08:14.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.657 "hdgst": false, 00:08:14.657 "ddgst": false 00:08:14.657 }, 00:08:14.657 "method": "bdev_nvme_attach_controller" 00:08:14.657 }' 00:08:14.658 [2024-11-15 11:26:55.038225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.658 [2024-11-15 11:26:55.038247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.658 [2024-11-15 11:26:55.046253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.658 [2024-11-15 11:26:55.046274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.658 [2024-11-15 11:26:55.054261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.658 [2024-11-15 11:26:55.054281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.658 [2024-11-15 11:26:55.062312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.658 [2024-11-15 11:26:55.062335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.658 [2024-11-15 11:26:55.068251] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:08:14.658 [2024-11-15 11:26:55.068333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847690 ] 00:08:14.658 [2024-11-15 11:26:55.070328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.658 [2024-11-15 11:26:55.070350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.658 [2024-11-15 11:26:55.078358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.658 [2024-11-15 11:26:55.078385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.086377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.086400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.094384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.094405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.102406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.102428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.110427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.110447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.118449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.118469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.126471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.126493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.134496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.134519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.139730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.916 [2024-11-15 11:26:55.142525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.916 [2024-11-15 11:26:55.142547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.916 [2024-11-15 11:26:55.154682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.154735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.162595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.162618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.170610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.170631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:14.917 [2024-11-15 11:26:55.178630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.178651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.186651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.186670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.194663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.194682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.202692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.202712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.204874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.917 [2024-11-15 11:26:55.210717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.210737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.218749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.218772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.226796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.226830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.238873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.238916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.250911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.250953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.262939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.262981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.274949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.274987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.282945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.282976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.295029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.295070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.307030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.307066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 
11:26:55.314999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.315019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.323023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.323043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.331172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.331197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:14.917 [2024-11-15 11:26:55.339225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:14.917 [2024-11-15 11:26:55.339250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.347212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.347235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.355236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.355259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.363255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.363276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.371276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.371318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.379323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.379344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.387344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.387365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.395375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.395398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.403384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.403407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.411401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.411423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.419434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.419454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.427457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.427477] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.435479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.435499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.443503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.443524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.451531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.451554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.459548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.459569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.467570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.467608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.475609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.475634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.483633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.483653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.491658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.491696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.499676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.499696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.507684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.507703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.515722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.515741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.523737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.523756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.531779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.531800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.539796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.539819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.547860] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.175 [2024-11-15 11:26:55.547903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.175 [2024-11-15 11:26:55.555846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.176 [2024-11-15 11:26:55.555870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.176 Running I/O for 5 seconds... 00:08:15.176 [2024-11-15 11:26:55.563865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.176 [2024-11-15 11:26:55.563885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.176 [2024-11-15 11:26:55.578329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.176 [2024-11-15 11:26:55.578357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.176 [2024-11-15 11:26:55.589360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.176 [2024-11-15 11:26:55.589388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.602040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.602069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.612001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.612029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.623331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.623359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.635877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.635905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.646269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.646296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.656784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.656823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.667315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.667342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.678068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.678095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.688759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.688787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.701086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 
[2024-11-15 11:26:55.701115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.710888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.710915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.722070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.722099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.734669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.734697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.746327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.746355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.755244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.755271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.767145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.767172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.779724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.779752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.789874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.789902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.800879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.800907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.811510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.811538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.822847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.822874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.833412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.833439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.844233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.844261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.434 [2024-11-15 11:26:55.854925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.434 [2024-11-15 11:26:55.854952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.865836] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.865871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.878849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.878892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.889044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.889072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.899277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.899313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.910019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.910046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.922703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.922732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.933080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.933108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.943597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.943624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.954572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.954600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.965363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.965392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.978048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.978076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.987929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.987956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:55.998502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:55.998530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.008925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.008953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.021357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.021385] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.031647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.031676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.042205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.042233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.052404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.052432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.063007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.063035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.072880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.072916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.083178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.083206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.093672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.093700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.106397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.106426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.693 [2024-11-15 11:26:56.116579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.693 [2024-11-15 11:26:56.116607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.127222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.127250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.139575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.139603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.149728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.149756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.160062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.160105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.170215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.170242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.180782] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.180810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.191231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.191258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.201950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.201992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.212587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.212615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.223297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.223332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.234084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.234111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.244970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.244997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.257581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.257609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.267644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.267672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.278212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.951 [2024-11-15 11:26:56.278240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.951 [2024-11-15 11:26:56.288914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.288942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.299280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.299315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.310236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.310263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.322578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.322605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.331683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.331711] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.345539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.345567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.355829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.355871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:15.952 [2024-11-15 11:26:56.366288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:15.952 [2024-11-15 11:26:56.366323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.376879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.376906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.387310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.387338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.398140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.398168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.410723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.410751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.420982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.421010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.431586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.431613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.442427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.442454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.452754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.452782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.463283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.463319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.473951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.473979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.486354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.486381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.495982] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.496009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.508796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.508824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.518927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.518954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.529318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.529345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.539676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.539703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.549873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.549900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.560286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.560325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 11853.00 IOPS, 92.60 MiB/s [2024-11-15T10:26:56.637Z] [2024-11-15 11:26:56.571081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.571108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.581870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.581898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.594567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.594602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.604858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.604886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.615279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.615314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.210 [2024-11-15 11:26:56.625781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.210 [2024-11-15 11:26:56.625808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.636325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.636358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.649015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:16.470 [2024-11-15 11:26:56.649043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.659154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.659181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.669841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.669869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.680154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.680190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.690716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.690743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.701333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.701360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.711524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.711552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.722506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.722534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.733348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.733376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.743775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.743802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.754140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.754167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.764814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.764842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.775176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.775203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.785899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.785926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.796172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.796199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.806731] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.806760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.817851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.817879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.828586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.828613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.841272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.841299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.851496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.851523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.862293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.862331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.875074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.875101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.470 [2024-11-15 11:26:56.885395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.470 [2024-11-15 11:26:56.885429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.896024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.896051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.908595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.908623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.918332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.918359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.928782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.928809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.939405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.939432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.952524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.952551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.962624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.962651] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.973343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.973371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.983671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.983699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:56.994360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:56.994388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.004832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.004860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.015324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.015352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.025879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.025907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.036450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.036478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.047198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.047226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.057796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.057824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.069099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.069128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.079656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.079683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.093147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.093183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.103400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.103427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.113858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.113885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.124625] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.124653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.135351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.135379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.729 [2024-11-15 11:26:57.148184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.729 [2024-11-15 11:26:57.148212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.159862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.159890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.168953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.168981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.180599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.180628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.191368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.191398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.201851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.201880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.212446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.212483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.223414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.223443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.235913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.235941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.246142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.246170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.256873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.256902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.267658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.267686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.278563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.278590] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.291093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.291121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.300610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.300645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.311376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.311406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.322536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.322565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.335529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.335557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.345568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.345596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.356255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.356283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.369184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.369228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.379596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.379624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.390042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.390069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.400483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.400512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:16.988 [2024-11-15 11:26:57.411268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:16.988 [2024-11-15 11:26:57.411296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.422166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.422194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.433319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.433347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.446258] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.446286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.456582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.456609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.467566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.467593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.480714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.480742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.490951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.490979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.501215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.501242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.512069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.512096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.524403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.524442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.534549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.534577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.545210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.545239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.556112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.556139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.566461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.566488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 11890.00 IOPS, 92.89 MiB/s [2024-11-15T10:26:57.694Z] [2024-11-15 11:26:57.577190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.577218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.588051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.588079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.600726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:17.267 [2024-11-15 11:26:57.600754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.610961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.610989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.621192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.621220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.631598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.631626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.642031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.642059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.652291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.652339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.662931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.662959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.675598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.675626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.267 [2024-11-15 11:26:57.685830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.267 [2024-11-15 11:26:57.685858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.696566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.696594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.710098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.710125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.720393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.720421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.730697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.730725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.740874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.740901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.751120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.751148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.761915] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.761942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.774925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.774953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.785221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.785248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.796129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.796156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.806675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.806702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.817682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.817710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.830397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.830425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.840762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.840789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.851476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.851504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.862454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.862487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.873345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.873373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.884867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.884894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.896047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.896074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.908628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.908655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.918417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.918445] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.929653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.929681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.525 [2024-11-15 11:26:57.942766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.525 [2024-11-15 11:26:57.942794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:57.954390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:57.954418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:57.963598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:57.963627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:57.975360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:57.975388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:57.987995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:57.988024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:57.998132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:57.998160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.008774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.008802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.021139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.021167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.031093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.031122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.041875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.041903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.052215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.052242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.062602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.062630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.073279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.073314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.086905] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.086934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.097285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.097321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.107854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.107881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.118781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.118809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.129666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.129701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.141943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.141988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.152128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.152155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.162632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.162659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.173384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.173412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.183661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.183689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.194588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.194615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:17.784 [2024-11-15 11:26:58.205585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:17.784 [2024-11-15 11:26:58.205613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.216410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.216438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.228930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.228957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.238993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.239020] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.249560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.249588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.260488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.260516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.271006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.271033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.284765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.284793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.294661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.294688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.305031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.305059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.315621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.315649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.326236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.326265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.336641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.336677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.347570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.347598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.358626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.358654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.369061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.369105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.380016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.380044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.393148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:18.043 [2024-11-15 11:26:58.393177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:18.043 [2024-11-15 11:26:58.403330] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:18.043 [2024-11-15 11:26:58.403358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... this pair of messages, "Requested NSID 1 already in use" followed by "Unable to add namespace", repeats roughly every 10 ms for each nvmf_subsystem_add_ns attempt while the zcopy I/O job runs; only the periodic throughput samples, the final attempt and the latency summary are kept below ...]
00:08:18.303 11898.67 IOPS, 92.96 MiB/s [2024-11-15T10:26:58.730Z] 
00:08:19.338 11916.25 IOPS, 93.10 MiB/s [2024-11-15T10:26:59.765Z] 
00:08:20.370 11919.80 IOPS, 93.12 MiB/s [2024-11-15T10:27:00.797Z] 
00:08:20.370 Latency(us) 
00:08:20.370 [2024-11-15T10:27:00.797Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 
00:08:20.370 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 
00:08:20.370 Nvme1n1              :       5.01   11921.54      93.14       0.00     0.00   10723.22    4708.88   22622.06 
00:08:20.370 [2024-11-15T10:27:00.797Z] =================================================================================================================== 
00:08:20.370 [2024-11-15T10:27:00.797Z] Total                :   11921.54      93.14       0.00     0.00   10723.22    4708.88   22622.06 
00:08:20.628 [2024-11-15 11:27:00.807224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:20.628 [2024-11-15 11:27:00.807243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:20.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2847690) - No such process 
00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2847690 
00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:20.628 11:27:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.628 delay0 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.628 11:27:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:20.628 [2024-11-15 11:27:00.932099] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:27.181 [2024-11-15 11:27:07.158999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21834b0 is same with the state(6) to be set 00:08:27.181 Initializing NVMe Controllers 00:08:27.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:27.181 Initialization complete. Launching workers. 
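For anyone reproducing this step outside the harness, the setup traced above reduces to three commands run from the SPDK repository root against an already running target. This is only a sketch: the bdev, subsystem and address values are copied from the trace (they are specific to this CI host), and scripts/rpc.py is assumed to be talking to the default RPC socket.

  # stack a delay bdev (1 s average/p99 latency) on malloc0 so the abort tool has long-lived I/O to cancel
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose the delay bdev as namespace 1 of the test subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # run the abort example for 5 seconds: core mask 0x1, queue depth 64, 50/50 random read/write, over NVMe/TCP
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
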
00:08:27.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 122 00:08:27.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 409, failed to submit 33 00:08:27.181 success 231, unsuccessful 178, failed 0 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.181 rmmod nvme_tcp 00:08:27.181 rmmod nvme_fabrics 00:08:27.181 rmmod nvme_keyring 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2846348 ']' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2846348 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2846348 ']' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2846348 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2846348 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2846348' 00:08:27.181 killing process with pid 2846348 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2846348 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2846348 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:27.181 11:27:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.181 11:27:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.134 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:29.134 00:08:29.134 real 0m28.106s 00:08:29.134 user 0m41.857s 00:08:29.134 sys 0m8.337s 00:08:29.134 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.134 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.134 ************************************ 00:08:29.134 END TEST nvmf_zcopy 00:08:29.134 ************************************ 00:08:29.393 11:27:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.393 11:27:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.393 11:27:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.393 11:27:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.393 ************************************ 00:08:29.393 START TEST nvmf_nmic 00:08:29.393 ************************************ 00:08:29.393 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:29.394 * Looking for test storage... 
00:08:29.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.394 --rc genhtml_branch_coverage=1 00:08:29.394 --rc genhtml_function_coverage=1 00:08:29.394 --rc genhtml_legend=1 00:08:29.394 --rc geninfo_all_blocks=1 00:08:29.394 --rc geninfo_unexecuted_blocks=1 00:08:29.394 00:08:29.394 ' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.394 --rc genhtml_branch_coverage=1 00:08:29.394 --rc genhtml_function_coverage=1 00:08:29.394 --rc genhtml_legend=1 00:08:29.394 --rc geninfo_all_blocks=1 00:08:29.394 --rc geninfo_unexecuted_blocks=1 00:08:29.394 00:08:29.394 ' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.394 --rc genhtml_branch_coverage=1 00:08:29.394 --rc genhtml_function_coverage=1 00:08:29.394 --rc genhtml_legend=1 00:08:29.394 --rc geninfo_all_blocks=1 00:08:29.394 --rc geninfo_unexecuted_blocks=1 00:08:29.394 00:08:29.394 ' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.394 --rc genhtml_branch_coverage=1 00:08:29.394 --rc genhtml_function_coverage=1 00:08:29.394 --rc genhtml_legend=1 00:08:29.394 --rc geninfo_all_blocks=1 00:08:29.394 --rc geninfo_unexecuted_blocks=1 00:08:29.394 00:08:29.394 ' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
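The xtrace block above is the lcov capability probe from scripts/common.sh: the installed lcov version (1.15 here, taken from lcov --version) is split on '.', '-' and ':' and compared field by field against 2; because it sorts lower, the old --rc lcov_branch_coverage=1 option spelling is selected. A standalone sketch of that comparison with a hypothetical helper name, assuming purely numeric version fields, not the exact cmp_versions code:

  # return 0 when version $1 sorts before version $2 (fields split on ., - and :)
  version_lt() {
      local IFS='.-:' v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }

  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi
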
00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.394 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:29.395 
11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.395 11:27:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:31.929 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:31.929 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.929 11:27:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:31.929 Found net devices under 0000:09:00.0: cvl_0_0 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.929 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:31.930 Found net devices under 0000:09:00.1: cvl_0_1 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.930 11:27:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:08:31.930 00:08:31.930 --- 10.0.0.2 ping statistics --- 00:08:31.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.930 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:08:31.930 00:08:31.930 --- 10.0.0.1 ping statistics --- 00:08:31.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.930 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2851687 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2851687 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2851687 ']' 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.930 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:31.930 [2024-11-15 11:27:12.115325] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
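The connectivity checks above finish nvmftestinit for this test: the two E810 ports found earlier are split into a point-to-point topology, with cvl_0_0 moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1), plus an iptables ACCEPT for the NVMe/TCP port. A condensed sketch of that sequence, taken from the commands shown in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port on the initiator-side interface; the comment tags the rule for later cleanup
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # verify both directions before starting the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the SPDK process whose startup banner continues below.
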
00:08:31.930 [2024-11-15 11:27:12.115409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.930 [2024-11-15 11:27:12.182722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.930 [2024-11-15 11:27:12.239796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.930 [2024-11-15 11:27:12.239847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.930 [2024-11-15 11:27:12.239876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.930 [2024-11-15 11:27:12.239887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.930 [2024-11-15 11:27:12.239897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.930 [2024-11-15 11:27:12.241493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.930 [2024-11-15 11:27:12.241554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.930 [2024-11-15 11:27:12.241602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.930 [2024-11-15 11:27:12.241605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.188 [2024-11-15 11:27:12.423481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.188 Malloc0 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.188 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.189 [2024-11-15 11:27:12.494969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:32.189 test case1: single bdev can't be used in multiple subsystems 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.189 [2024-11-15 11:27:12.518819] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:32.189 [2024-11-15 11:27:12.518849] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:32.189 [2024-11-15 11:27:12.518886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.189 request: 00:08:32.189 { 00:08:32.189 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:32.189 "namespace": { 00:08:32.189 "bdev_name": "Malloc0", 00:08:32.189 "no_auto_visible": false 
00:08:32.189 }, 00:08:32.189 "method": "nvmf_subsystem_add_ns", 00:08:32.189 "req_id": 1 00:08:32.189 } 00:08:32.189 Got JSON-RPC error response 00:08:32.189 response: 00:08:32.189 { 00:08:32.189 "code": -32602, 00:08:32.189 "message": "Invalid parameters" 00:08:32.189 } 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:32.189 Adding namespace failed - expected result. 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:32.189 test case2: host connect to nvmf target in multiple paths 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.189 [2024-11-15 11:27:12.526937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.189 11:27:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:33.122 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:33.687 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:33.687 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:33.687 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:33.687 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:33.687 11:27:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:35.589 11:27:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:35.589 [global] 00:08:35.589 thread=1 00:08:35.589 invalidate=1 00:08:35.589 rw=write 00:08:35.589 time_based=1 00:08:35.589 runtime=1 00:08:35.589 ioengine=libaio 00:08:35.589 direct=1 00:08:35.589 bs=4096 00:08:35.589 iodepth=1 00:08:35.589 norandommap=0 00:08:35.589 numjobs=1 00:08:35.589 00:08:35.589 verify_dump=1 00:08:35.589 verify_backlog=512 00:08:35.589 verify_state_save=0 00:08:35.589 do_verify=1 00:08:35.589 verify=crc32c-intel 00:08:35.589 [job0] 00:08:35.589 filename=/dev/nvme0n1 00:08:35.589 Could not set queue depth (nvme0n1) 00:08:35.847 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:35.847 fio-3.35 00:08:35.847 Starting 1 thread 00:08:37.219 00:08:37.219 job0: (groupid=0, jobs=1): err= 0: pid=2852235: Fri Nov 15 11:27:17 2024 00:08:37.219 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:08:37.219 slat (nsec): min=12730, max=33395, avg=24008.61, stdev=9069.66 00:08:37.219 clat (usec): min=40302, max=42007, avg=41417.50, stdev=563.74 00:08:37.219 lat (usec): min=40317, max=42025, avg=41441.51, stdev=563.60 00:08:37.219 clat percentiles (usec): 00:08:37.219 | 1.00th=[40109], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:37.219 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:08:37.219 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:37.219 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:37.219 | 99.99th=[42206] 00:08:37.219 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:08:37.219 slat (nsec): min=6445, max=45824, avg=14040.56, stdev=5878.91 00:08:37.219 clat (usec): min=122, max=276, avg=145.31, stdev=14.52 00:08:37.219 lat (usec): min=130, max=322, avg=159.35, stdev=16.14 00:08:37.219 clat percentiles (usec): 00:08:37.219 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:08:37.219 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:08:37.219 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:08:37.219 | 99.00th=[ 219], 99.50th=[ 241], 99.90th=[ 277], 99.95th=[ 277], 00:08:37.219 | 99.99th=[ 277] 00:08:37.219 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:37.219 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:37.219 lat (usec) : 250=95.51%, 500=0.19% 00:08:37.219 lat (msec) : 50=4.30% 00:08:37.219 cpu : usr=0.39%, sys=0.68%, ctx=535, majf=0, minf=1 00:08:37.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:37.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:37.219 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:37.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:37.219 00:08:37.219 Run status group 0 (all jobs): 00:08:37.220 READ: bw=88.7KiB/s (90.8kB/s), 88.7KiB/s-88.7KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), run=1037-1037msec 00:08:37.220 WRITE: bw=1975KiB/s (2022kB/s), 1975KiB/s-1975KiB/s (2022kB/s-2022kB/s), io=2048KiB (2097kB), run=1037-1037msec 00:08:37.220 00:08:37.220 Disk stats (read/write): 00:08:37.220 nvme0n1: ios=69/512, merge=0/0, ticks=800/66, in_queue=866, util=91.38% 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:37.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.220 rmmod nvme_tcp 00:08:37.220 rmmod nvme_fabrics 00:08:37.220 rmmod nvme_keyring 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2851687 ']' 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2851687 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2851687 ']' 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2851687 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2851687 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2851687' 00:08:37.220 killing process with pid 2851687 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2851687 00:08:37.220 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 2851687 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.478 11:27:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.015 00:08:40.015 real 0m10.240s 00:08:40.015 user 0m23.223s 00:08:40.015 sys 0m2.471s 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.015 ************************************ 00:08:40.015 END TEST nvmf_nmic 00:08:40.015 ************************************ 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.015 ************************************ 00:08:40.015 START TEST nvmf_fio_target 00:08:40.015 ************************************ 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:40.015 * Looking for test storage... 
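The nvmf_nmic run that just finished exercises two properties of the target: a malloc bdev can only be claimed as a namespace by one subsystem at a time (test case1, the expected JSON-RPC -32602 error above), and one subsystem can be reached over multiple listeners (test case2, the two nvme connect calls against ports 4420 and 4421 followed by the short 4 KiB libaio write job). A compressed sketch of the target-side sequence, using the same scripts/rpc.py calls that rpc_cmd wraps in the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0

  # cnode1 claims the bdev and listens on the first port
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # case1: a second subsystem may not reuse the bdev already claimed by cnode1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'

  # case2: cnode1 gets a second listener so the host can connect over two paths
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

On the initiator side the same subsystem is then connected once per listener port (4420 and 4421) with nvme connect, fio runs its verify-enabled write job against /dev/nvme0n1, and both controllers are disconnected before the teardown shown above.
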
00:08:40.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.015 11:27:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.015 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.016 --rc genhtml_branch_coverage=1 00:08:40.016 --rc genhtml_function_coverage=1 00:08:40.016 --rc genhtml_legend=1 00:08:40.016 --rc geninfo_all_blocks=1 00:08:40.016 --rc geninfo_unexecuted_blocks=1 00:08:40.016 00:08:40.016 ' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.016 --rc genhtml_branch_coverage=1 00:08:40.016 --rc genhtml_function_coverage=1 00:08:40.016 --rc genhtml_legend=1 00:08:40.016 --rc geninfo_all_blocks=1 00:08:40.016 --rc geninfo_unexecuted_blocks=1 00:08:40.016 00:08:40.016 ' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.016 --rc genhtml_branch_coverage=1 00:08:40.016 --rc genhtml_function_coverage=1 00:08:40.016 --rc genhtml_legend=1 00:08:40.016 --rc geninfo_all_blocks=1 00:08:40.016 --rc geninfo_unexecuted_blocks=1 00:08:40.016 00:08:40.016 ' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.016 --rc genhtml_branch_coverage=1 00:08:40.016 --rc genhtml_function_coverage=1 00:08:40.016 --rc genhtml_legend=1 00:08:40.016 --rc geninfo_all_blocks=1 00:08:40.016 --rc geninfo_unexecuted_blocks=1 00:08:40.016 00:08:40.016 ' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.016 11:27:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.016 11:27:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.921 11:27:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:41.921 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:41.921 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.921 11:27:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:41.921 Found net devices under 0000:09:00.0: cvl_0_0 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:41.921 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:41.922 Found net devices under 0000:09:00.1: cvl_0_1 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:41.922 11:27:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:41.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:08:41.922 00:08:41.922 --- 10.0.0.2 ping statistics --- 00:08:41.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.922 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:08:41.922 00:08:41.922 --- 10.0.0.1 ping statistics --- 00:08:41.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.922 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.922 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2854442 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2854442 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2854442 ']' 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.180 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.180 [2024-11-15 11:27:22.400865] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:08:42.180 [2024-11-15 11:27:22.400951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.180 [2024-11-15 11:27:22.473412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.180 [2024-11-15 11:27:22.534924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.180 [2024-11-15 11:27:22.534973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.180 [2024-11-15 11:27:22.535002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.180 [2024-11-15 11:27:22.535014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.180 [2024-11-15 11:27:22.535024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.180 [2024-11-15 11:27:22.536715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.180 [2024-11-15 11:27:22.536783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.180 [2024-11-15 11:27:22.536850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.180 [2024-11-15 11:27:22.536854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.439 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:42.697 [2024-11-15 11:27:22.926081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.697 11:27:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.955 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:42.955 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.213 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:43.213 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.471 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:43.471 11:27:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.729 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:43.729 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:43.987 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.552 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:44.552 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.552 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:44.552 11:27:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.118 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:45.118 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:45.118 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:45.683 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:45.683 11:27:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.683 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:45.683 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:45.940 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.197 [2024-11-15 11:27:26.564735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.197 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:46.455 11:27:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:46.712 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.645 11:27:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:47.645 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:47.646 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.646 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:47.646 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:47.646 11:27:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:49.545 11:27:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:49.545 [global] 00:08:49.545 thread=1 00:08:49.545 invalidate=1 00:08:49.545 rw=write 00:08:49.545 time_based=1 00:08:49.545 runtime=1 00:08:49.545 ioengine=libaio 00:08:49.545 direct=1 00:08:49.545 bs=4096 00:08:49.545 iodepth=1 00:08:49.545 norandommap=0 00:08:49.545 numjobs=1 00:08:49.545 00:08:49.545 verify_dump=1 00:08:49.545 verify_backlog=512 00:08:49.545 verify_state_save=0 00:08:49.545 do_verify=1 00:08:49.545 verify=crc32c-intel 00:08:49.545 [job0] 00:08:49.545 filename=/dev/nvme0n1 00:08:49.545 [job1] 00:08:49.545 filename=/dev/nvme0n2 00:08:49.545 [job2] 00:08:49.545 filename=/dev/nvme0n3 00:08:49.545 [job3] 00:08:49.545 filename=/dev/nvme0n4 00:08:49.545 Could not set queue depth (nvme0n1) 00:08:49.545 Could not set queue depth (nvme0n2) 00:08:49.545 Could not set queue depth (nvme0n3) 00:08:49.545 Could not set queue depth (nvme0n4) 00:08:49.803 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.803 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.803 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.803 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.803 fio-3.35 00:08:49.803 Starting 4 threads 00:08:51.176 00:08:51.176 job0: (groupid=0, jobs=1): err= 0: pid=2855503: Fri Nov 15 11:27:31 2024 00:08:51.176 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:51.176 slat (nsec): min=5465, max=46631, avg=13444.93, stdev=5307.64 00:08:51.176 clat (usec): min=188, max=539, avg=243.48, stdev=29.33 00:08:51.176 lat (usec): min=194, max=556, avg=256.92, stdev=31.30 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 227], 
00:08:51.176 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:08:51.176 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:08:51.176 | 99.00th=[ 367], 99.50th=[ 465], 99.90th=[ 515], 99.95th=[ 529], 00:08:51.176 | 99.99th=[ 537] 00:08:51.176 write: IOPS=2098, BW=8396KiB/s (8597kB/s)(8404KiB/1001msec); 0 zone resets 00:08:51.176 slat (nsec): min=7275, max=67247, avg=18025.12, stdev=6520.81 00:08:51.176 clat (usec): min=144, max=330, avg=197.99, stdev=23.75 00:08:51.176 lat (usec): min=152, max=348, avg=216.02, stdev=25.24 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 182], 00:08:51.176 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:08:51.176 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 241], 00:08:51.176 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 322], 00:08:51.176 | 99.99th=[ 330] 00:08:51.176 bw ( KiB/s): min= 8192, max= 8192, per=40.18%, avg=8192.00, stdev= 0.00, samples=1 00:08:51.176 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:51.176 lat (usec) : 250=82.96%, 500=16.97%, 750=0.07% 00:08:51.176 cpu : usr=5.20%, sys=8.70%, ctx=4149, majf=0, minf=2 00:08:51.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:51.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.176 issued rwts: total=2048,2101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:51.176 job1: (groupid=0, jobs=1): err= 0: pid=2855516: Fri Nov 15 11:27:31 2024 00:08:51.176 read: IOPS=297, BW=1191KiB/s (1219kB/s)(1192KiB/1001msec) 00:08:51.176 slat (nsec): min=8349, max=44412, avg=10839.02, stdev=5067.39 00:08:51.176 clat (usec): min=222, max=42046, avg=2829.30, stdev=9934.85 00:08:51.176 lat (usec): min=232, max=42060, avg=2840.14, stdev=9938.06 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:08:51.176 | 30.00th=[ 310], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:08:51.176 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[41681], 00:08:51.176 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:51.176 | 99.99th=[42206] 00:08:51.176 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:08:51.176 slat (usec): min=8, max=19287, avg=55.04, stdev=851.65 00:08:51.176 clat (usec): min=147, max=462, avg=238.81, stdev=38.47 00:08:51.176 lat (usec): min=156, max=19704, avg=293.86, stdev=860.29 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[ 163], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 210], 00:08:51.176 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 245], 00:08:51.176 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 289], 00:08:51.176 | 99.00th=[ 412], 99.50th=[ 429], 99.90th=[ 461], 99.95th=[ 461], 00:08:51.176 | 99.99th=[ 461] 00:08:51.176 bw ( KiB/s): min= 4096, max= 4096, per=20.09%, avg=4096.00, stdev= 0.00, samples=1 00:08:51.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:51.176 lat (usec) : 250=42.59%, 500=55.19% 00:08:51.176 lat (msec) : 50=2.22% 00:08:51.176 cpu : usr=0.70%, sys=1.60%, ctx=812, majf=0, minf=1 00:08:51.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:51.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.176 issued rwts: total=298,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:51.176 job2: (groupid=0, jobs=1): err= 0: pid=2855519: Fri Nov 15 11:27:31 2024 00:08:51.176 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:08:51.176 slat (nsec): min=13629, max=34568, avg=22828.32, stdev=9888.59 00:08:51.176 clat (usec): min=40905, max=41286, avg=40983.09, stdev=74.11 00:08:51.176 lat (usec): min=40939, max=41302, avg=41005.92, stdev=70.52 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:51.176 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:51.176 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:51.176 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:51.176 | 99.99th=[41157] 00:08:51.176 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:08:51.176 slat (nsec): min=6140, max=33982, avg=10628.98, stdev=4761.39 00:08:51.176 clat (usec): min=139, max=605, avg=205.89, stdev=36.70 00:08:51.176 lat (usec): min=149, max=612, avg=216.52, stdev=36.61 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:08:51.176 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:08:51.176 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 249], 00:08:51.176 | 99.00th=[ 334], 99.50th=[ 408], 99.90th=[ 603], 99.95th=[ 603], 00:08:51.176 | 99.99th=[ 603] 00:08:51.176 bw ( KiB/s): min= 4096, max= 4096, per=20.09%, avg=4096.00, stdev= 0.00, samples=1 00:08:51.176 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:51.176 lat (usec) : 250=92.13%, 500=3.56%, 750=0.19% 00:08:51.176 lat (msec) : 50=4.12% 00:08:51.176 cpu : usr=0.39%, sys=0.30%, ctx=536, majf=0, minf=1 00:08:51.176 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:51.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.176 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.176 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:51.176 job3: (groupid=0, jobs=1): err= 0: pid=2855520: Fri Nov 15 11:27:31 2024 00:08:51.176 read: IOPS=1970, BW=7880KiB/s (8069kB/s)(7888KiB/1001msec) 00:08:51.176 slat (nsec): min=5926, max=51026, avg=14477.05, stdev=5586.98 00:08:51.176 clat (usec): min=189, max=490, avg=255.96, stdev=42.19 00:08:51.176 lat (usec): min=196, max=508, avg=270.44, stdev=42.38 00:08:51.176 clat percentiles (usec): 00:08:51.176 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 229], 00:08:51.176 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:08:51.176 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 330], 00:08:51.176 | 99.00th=[ 433], 99.50th=[ 441], 99.90th=[ 486], 99.95th=[ 490], 00:08:51.176 | 99.99th=[ 490] 00:08:51.176 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:51.176 slat (nsec): min=7777, max=62105, avg=19276.23, stdev=6846.79 00:08:51.176 clat (usec): min=145, max=439, avg=199.12, stdev=29.74 00:08:51.176 lat (usec): min=155, max=502, avg=218.40, stdev=29.58 00:08:51.176 clat percentiles (usec): 
00:08:51.176 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 182], 00:08:51.176 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:08:51.176 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 245], 95.00th=[ 260], 00:08:51.176 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 379], 99.95th=[ 433], 00:08:51.176 | 99.99th=[ 441] 00:08:51.176 bw ( KiB/s): min= 8192, max= 8192, per=40.18%, avg=8192.00, stdev= 0.00, samples=1 00:08:51.177 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:51.177 lat (usec) : 250=76.47%, 500=23.53% 00:08:51.177 cpu : usr=6.00%, sys=8.00%, ctx=4023, majf=0, minf=1 00:08:51.177 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:51.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.177 issued rwts: total=1972,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.177 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:51.177 00:08:51.177 Run status group 0 (all jobs): 00:08:51.177 READ: bw=16.7MiB/s (17.5MB/s), 86.7KiB/s-8184KiB/s (88.8kB/s-8380kB/s), io=17.0MiB (17.8MB), run=1001-1015msec 00:08:51.177 WRITE: bw=19.9MiB/s (20.9MB/s), 2018KiB/s-8396KiB/s (2066kB/s-8597kB/s), io=20.2MiB (21.2MB), run=1001-1015msec 00:08:51.177 00:08:51.177 Disk stats (read/write): 00:08:51.177 nvme0n1: ios=1586/1948, merge=0/0, ticks=373/371, in_queue=744, util=86.97% 00:08:51.177 nvme0n2: ios=58/512, merge=0/0, ticks=803/104, in_queue=907, util=90.13% 00:08:51.177 nvme0n3: ios=40/512, merge=0/0, ticks=1605/95, in_queue=1700, util=93.41% 00:08:51.177 nvme0n4: ios=1562/1938, merge=0/0, ticks=1249/366, in_queue=1615, util=94.20% 00:08:51.177 11:27:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:51.177 [global] 00:08:51.177 thread=1 00:08:51.177 invalidate=1 00:08:51.177 rw=randwrite 00:08:51.177 time_based=1 00:08:51.177 runtime=1 00:08:51.177 ioengine=libaio 00:08:51.177 direct=1 00:08:51.177 bs=4096 00:08:51.177 iodepth=1 00:08:51.177 norandommap=0 00:08:51.177 numjobs=1 00:08:51.177 00:08:51.177 verify_dump=1 00:08:51.177 verify_backlog=512 00:08:51.177 verify_state_save=0 00:08:51.177 do_verify=1 00:08:51.177 verify=crc32c-intel 00:08:51.177 [job0] 00:08:51.177 filename=/dev/nvme0n1 00:08:51.177 [job1] 00:08:51.177 filename=/dev/nvme0n2 00:08:51.177 [job2] 00:08:51.177 filename=/dev/nvme0n3 00:08:51.177 [job3] 00:08:51.177 filename=/dev/nvme0n4 00:08:51.177 Could not set queue depth (nvme0n1) 00:08:51.177 Could not set queue depth (nvme0n2) 00:08:51.177 Could not set queue depth (nvme0n3) 00:08:51.177 Could not set queue depth (nvme0n4) 00:08:51.177 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.177 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.177 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.177 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:51.177 fio-3.35 00:08:51.177 Starting 4 threads 00:08:52.551 00:08:52.551 job0: (groupid=0, jobs=1): err= 0: pid=2855756: Fri Nov 15 11:27:32 2024 00:08:52.551 read: IOPS=1155, BW=4623KiB/s (4734kB/s)(4628KiB/1001msec) 00:08:52.551 slat (nsec): 
min=4927, max=52348, avg=7798.08, stdev=3697.81 00:08:52.551 clat (usec): min=161, max=42310, avg=605.98, stdev=4011.53 00:08:52.551 lat (usec): min=167, max=42329, avg=613.78, stdev=4013.09 00:08:52.551 clat percentiles (usec): 00:08:52.551 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:08:52.551 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 233], 00:08:52.551 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:08:52.551 | 99.00th=[ 906], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:08:52.551 | 99.99th=[42206] 00:08:52.551 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:08:52.551 slat (nsec): min=6643, max=53413, avg=11789.58, stdev=6668.48 00:08:52.551 clat (usec): min=117, max=618, avg=172.16, stdev=57.57 00:08:52.551 lat (usec): min=124, max=627, avg=183.95, stdev=61.28 00:08:52.551 clat percentiles (usec): 00:08:52.551 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:08:52.551 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 155], 00:08:52.551 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 247], 95.00th=[ 269], 00:08:52.551 | 99.00th=[ 375], 99.50th=[ 416], 99.90th=[ 523], 99.95th=[ 619], 00:08:52.551 | 99.99th=[ 619] 00:08:52.551 bw ( KiB/s): min= 4096, max= 4096, per=20.44%, avg=4096.00, stdev= 0.00, samples=1 00:08:52.551 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:52.551 lat (usec) : 250=87.71%, 500=11.70%, 750=0.15%, 1000=0.04% 00:08:52.551 lat (msec) : 50=0.41% 00:08:52.551 cpu : usr=2.00%, sys=2.80%, ctx=2695, majf=0, minf=1 00:08:52.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.551 issued rwts: total=1157,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.551 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.551 job1: (groupid=0, jobs=1): err= 0: pid=2855757: Fri Nov 15 11:27:32 2024 00:08:52.551 read: IOPS=1036, BW=4147KiB/s (4246kB/s)(4184KiB/1009msec) 00:08:52.551 slat (nsec): min=6060, max=60932, avg=15001.63, stdev=5414.99 00:08:52.551 clat (usec): min=187, max=41770, avg=624.13, stdev=3960.11 00:08:52.551 lat (usec): min=194, max=41779, avg=639.13, stdev=3959.71 00:08:52.551 clat percentiles (usec): 00:08:52.551 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 221], 00:08:52.551 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 235], 00:08:52.551 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 293], 00:08:52.551 | 99.00th=[ 334], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:08:52.551 | 99.99th=[41681] 00:08:52.551 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:08:52.551 slat (nsec): min=6088, max=69983, avg=18149.67, stdev=7243.96 00:08:52.551 clat (usec): min=139, max=4005, avg=195.08, stdev=101.86 00:08:52.551 lat (usec): min=149, max=4020, avg=213.23, stdev=101.71 00:08:52.551 clat percentiles (usec): 00:08:52.551 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 172], 00:08:52.551 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:08:52.551 | 70.00th=[ 202], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 241], 00:08:52.551 | 99.00th=[ 265], 99.50th=[ 310], 99.90th=[ 510], 99.95th=[ 4015], 00:08:52.552 | 99.99th=[ 4015] 00:08:52.552 bw ( KiB/s): min= 3792, max= 8496, per=30.66%, avg=6144.00, stdev=3326.23, 
samples=2 00:08:52.552 iops : min= 948, max= 2124, avg=1536.00, stdev=831.56, samples=2 00:08:52.552 lat (usec) : 250=91.98%, 500=7.55%, 750=0.04% 00:08:52.552 lat (msec) : 10=0.04%, 50=0.39% 00:08:52.552 cpu : usr=2.78%, sys=6.15%, ctx=2583, majf=0, minf=1 00:08:52.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.552 issued rwts: total=1046,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.552 job2: (groupid=0, jobs=1): err= 0: pid=2855758: Fri Nov 15 11:27:32 2024 00:08:52.552 read: IOPS=56, BW=227KiB/s (232kB/s)(232KiB/1022msec) 00:08:52.552 slat (nsec): min=6132, max=33893, avg=14831.05, stdev=6844.78 00:08:52.552 clat (usec): min=254, max=41989, avg=15260.01, stdev=19765.31 00:08:52.552 lat (usec): min=270, max=42004, avg=15274.84, stdev=19766.51 00:08:52.552 clat percentiles (usec): 00:08:52.552 | 1.00th=[ 255], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 371], 00:08:52.552 | 30.00th=[ 383], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 474], 00:08:52.552 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:08:52.552 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:52.552 | 99.99th=[42206] 00:08:52.552 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:08:52.552 slat (nsec): min=7437, max=49426, avg=17479.44, stdev=6635.44 00:08:52.552 clat (usec): min=176, max=614, avg=242.30, stdev=49.16 00:08:52.552 lat (usec): min=192, max=624, avg=259.78, stdev=48.20 00:08:52.552 clat percentiles (usec): 00:08:52.552 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:08:52.552 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:08:52.552 | 70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 285], 95.00th=[ 326], 00:08:52.552 | 99.00th=[ 420], 99.50th=[ 515], 99.90th=[ 611], 99.95th=[ 611], 00:08:52.552 | 99.99th=[ 611] 00:08:52.552 bw ( KiB/s): min= 4096, max= 4096, per=20.44%, avg=4096.00, stdev= 0.00, samples=1 00:08:52.552 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:52.552 lat (usec) : 250=65.09%, 500=30.35%, 750=0.53% 00:08:52.552 lat (msec) : 2=0.18%, 10=0.18%, 50=3.68% 00:08:52.552 cpu : usr=0.78%, sys=1.18%, ctx=570, majf=0, minf=1 00:08:52.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.552 issued rwts: total=58,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.552 job3: (groupid=0, jobs=1): err= 0: pid=2855759: Fri Nov 15 11:27:32 2024 00:08:52.552 read: IOPS=1030, BW=4124KiB/s (4223kB/s)(4132KiB/1002msec) 00:08:52.552 slat (nsec): min=4521, max=61167, avg=15076.15, stdev=8705.08 00:08:52.552 clat (usec): min=182, max=41165, avg=634.11, stdev=3787.25 00:08:52.552 lat (usec): min=200, max=41173, avg=649.19, stdev=3787.38 00:08:52.552 clat percentiles (usec): 00:08:52.552 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:08:52.552 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 251], 00:08:52.552 | 70.00th=[ 273], 80.00th=[ 330], 90.00th=[ 449], 95.00th=[ 502], 00:08:52.552 | 99.00th=[ 586], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:52.552 | 99.99th=[41157] 00:08:52.552 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:08:52.552 slat (nsec): min=5520, max=71600, avg=15224.95, stdev=7729.66 00:08:52.552 clat (usec): min=136, max=383, avg=192.86, stdev=39.44 00:08:52.552 lat (usec): min=144, max=396, avg=208.08, stdev=42.48 00:08:52.552 clat percentiles (usec): 00:08:52.552 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:08:52.552 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 194], 00:08:52.552 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 285], 00:08:52.552 | 99.00th=[ 338], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 383], 00:08:52.552 | 99.99th=[ 383] 00:08:52.552 bw ( KiB/s): min= 4096, max= 8192, per=30.66%, avg=6144.00, stdev=2896.31, samples=2 00:08:52.552 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:08:52.552 lat (usec) : 250=79.64%, 500=18.26%, 750=1.75% 00:08:52.552 lat (msec) : 50=0.35% 00:08:52.552 cpu : usr=2.80%, sys=3.50%, ctx=2569, majf=0, minf=1 00:08:52.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.552 issued rwts: total=1033,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.552 00:08:52.552 Run status group 0 (all jobs): 00:08:52.552 READ: bw=12.6MiB/s (13.2MB/s), 227KiB/s-4623KiB/s (232kB/s-4734kB/s), io=12.9MiB (13.5MB), run=1001-1022msec 00:08:52.552 WRITE: bw=19.6MiB/s (20.5MB/s), 2004KiB/s-6138KiB/s (2052kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1022msec 00:08:52.552 00:08:52.552 Disk stats (read/write): 00:08:52.552 nvme0n1: ios=822/1024, merge=0/0, ticks=925/183, in_queue=1108, util=98.30% 00:08:52.552 nvme0n2: ios=1077/1536, merge=0/0, ticks=652/248, in_queue=900, util=97.06% 00:08:52.552 nvme0n3: ios=53/512, merge=0/0, ticks=718/116, in_queue=834, util=89.06% 00:08:52.552 nvme0n4: ios=1046/1536, merge=0/0, ticks=962/286, in_queue=1248, util=91.50% 00:08:52.552 11:27:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:52.552 [global] 00:08:52.552 thread=1 00:08:52.552 invalidate=1 00:08:52.552 rw=write 00:08:52.552 time_based=1 00:08:52.552 runtime=1 00:08:52.552 ioengine=libaio 00:08:52.552 direct=1 00:08:52.552 bs=4096 00:08:52.552 iodepth=128 00:08:52.552 norandommap=0 00:08:52.552 numjobs=1 00:08:52.552 00:08:52.552 verify_dump=1 00:08:52.552 verify_backlog=512 00:08:52.552 verify_state_save=0 00:08:52.552 do_verify=1 00:08:52.552 verify=crc32c-intel 00:08:52.552 [job0] 00:08:52.552 filename=/dev/nvme0n1 00:08:52.552 [job1] 00:08:52.552 filename=/dev/nvme0n2 00:08:52.552 [job2] 00:08:52.552 filename=/dev/nvme0n3 00:08:52.552 [job3] 00:08:52.552 filename=/dev/nvme0n4 00:08:52.552 Could not set queue depth (nvme0n1) 00:08:52.552 Could not set queue depth (nvme0n2) 00:08:52.552 Could not set queue depth (nvme0n3) 00:08:52.552 Could not set queue depth (nvme0n4) 00:08:52.810 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.810 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.810 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.810 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:52.810 fio-3.35 00:08:52.810 Starting 4 threads 00:08:54.183 00:08:54.183 job0: (groupid=0, jobs=1): err= 0: pid=2855983: Fri Nov 15 11:27:34 2024 00:08:54.183 read: IOPS=4955, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1002msec) 00:08:54.183 slat (usec): min=2, max=19828, avg=98.31, stdev=613.52 00:08:54.183 clat (usec): min=736, max=32741, avg=12356.87, stdev=2321.54 00:08:54.183 lat (usec): min=2918, max=32758, avg=12455.18, stdev=2375.04 00:08:54.183 clat percentiles (usec): 00:08:54.183 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11338], 00:08:54.183 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:08:54.183 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14615], 95.00th=[15008], 00:08:54.183 | 99.00th=[17695], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:08:54.183 | 99.99th=[32637] 00:08:54.183 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:54.183 slat (usec): min=4, max=17289, avg=93.49, stdev=625.74 00:08:54.183 clat (usec): min=5447, max=44644, avg=12758.35, stdev=3941.85 00:08:54.183 lat (usec): min=5465, max=44659, avg=12851.84, stdev=3985.29 00:08:54.183 clat percentiles (usec): 00:08:54.183 | 1.00th=[ 7767], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[11338], 00:08:54.183 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:08:54.183 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14353], 95.00th=[16581], 00:08:54.183 | 99.00th=[38536], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:08:54.183 | 99.99th=[44827] 00:08:54.183 bw ( KiB/s): min=19656, max=21304, per=31.58%, avg=20480.00, stdev=1165.31, samples=2 00:08:54.183 iops : min= 4914, max= 5326, avg=5120.00, stdev=291.33, samples=2 00:08:54.183 lat (usec) : 750=0.01% 00:08:54.183 lat (msec) : 4=0.32%, 10=7.86%, 20=89.78%, 50=2.03% 00:08:54.183 cpu : usr=4.00%, sys=6.49%, ctx=366, majf=0, minf=1 00:08:54.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:54.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.183 issued rwts: total=4965,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.184 job1: (groupid=0, jobs=1): err= 0: pid=2855984: Fri Nov 15 11:27:34 2024 00:08:54.184 read: IOPS=2956, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1042msec) 00:08:54.184 slat (usec): min=2, max=17546, avg=149.92, stdev=999.46 00:08:54.184 clat (usec): min=8698, max=60314, avg=19522.26, stdev=12224.53 00:08:54.184 lat (usec): min=8907, max=60350, avg=19672.18, stdev=12321.98 00:08:54.184 clat percentiles (usec): 00:08:54.184 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[11338], 20.00th=[11731], 00:08:54.184 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13698], 00:08:54.184 | 70.00th=[17957], 80.00th=[28181], 90.00th=[44303], 95.00th=[46400], 00:08:54.184 | 99.00th=[49546], 99.50th=[51643], 99.90th=[56361], 99.95th=[60031], 00:08:54.184 | 99.99th=[60556] 00:08:54.184 write: IOPS=3439, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1042msec); 0 zone resets 00:08:54.184 slat (usec): min=4, max=26333, avg=144.17, stdev=928.10 00:08:54.184 clat (usec): min=7552, max=83150, avg=19981.86, stdev=13155.23 00:08:54.184 lat (usec): min=7565, max=83174, 
avg=20126.03, stdev=13226.55 00:08:54.184 clat percentiles (usec): 00:08:54.184 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10945], 20.00th=[11600], 00:08:54.184 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13566], 60.00th=[16909], 00:08:54.184 | 70.00th=[21103], 80.00th=[23725], 90.00th=[43779], 95.00th=[50594], 00:08:54.184 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[71828], 00:08:54.184 | 99.99th=[83362] 00:08:54.184 bw ( KiB/s): min= 8192, max=19536, per=21.38%, avg=13864.00, stdev=8021.42, samples=2 00:08:54.184 iops : min= 2048, max= 4884, avg=3466.00, stdev=2005.35, samples=2 00:08:54.184 lat (msec) : 10=4.10%, 20=65.64%, 50=27.05%, 100=3.21% 00:08:54.184 cpu : usr=2.59%, sys=4.03%, ctx=333, majf=0, minf=1 00:08:54.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:54.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.184 issued rwts: total=3081,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.184 job2: (groupid=0, jobs=1): err= 0: pid=2855985: Fri Nov 15 11:27:34 2024 00:08:54.184 read: IOPS=3849, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1006msec) 00:08:54.184 slat (usec): min=2, max=16655, avg=124.18, stdev=795.70 00:08:54.184 clat (usec): min=1239, max=52142, avg=15969.64, stdev=5594.69 00:08:54.184 lat (usec): min=1248, max=52146, avg=16093.82, stdev=5621.80 00:08:54.184 clat percentiles (usec): 00:08:54.184 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[12387], 20.00th=[13042], 00:08:54.184 | 30.00th=[13960], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:08:54.184 | 70.00th=[15401], 80.00th=[16581], 90.00th=[21365], 95.00th=[26084], 00:08:54.184 | 99.00th=[39060], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:08:54.184 | 99.99th=[52167] 00:08:54.184 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:08:54.184 slat (usec): min=3, max=19313, avg=120.35, stdev=764.20 00:08:54.184 clat (usec): min=180, max=57835, avg=15815.10, stdev=6385.48 00:08:54.184 lat (usec): min=223, max=57842, avg=15935.45, stdev=6408.16 00:08:54.184 clat percentiles (usec): 00:08:54.184 | 1.00th=[ 5211], 5.00th=[10421], 10.00th=[11076], 20.00th=[12911], 00:08:54.184 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14484], 00:08:54.184 | 70.00th=[15008], 80.00th=[18744], 90.00th=[23200], 95.00th=[24249], 00:08:54.184 | 99.00th=[49546], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:08:54.184 | 99.99th=[57934] 00:08:54.184 bw ( KiB/s): min=16384, max=16384, per=25.26%, avg=16384.00, stdev= 0.00, samples=2 00:08:54.184 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:08:54.184 lat (usec) : 250=0.01% 00:08:54.184 lat (msec) : 2=0.38%, 4=0.01%, 10=2.70%, 20=81.80%, 50=14.23% 00:08:54.184 lat (msec) : 100=0.87% 00:08:54.184 cpu : usr=2.79%, sys=5.57%, ctx=307, majf=0, minf=1 00:08:54.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:54.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.184 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.184 job3: (groupid=0, jobs=1): err= 0: pid=2855986: Fri Nov 15 11:27:34 2024 00:08:54.184 read: IOPS=3691, BW=14.4MiB/s 
(15.1MB/s)(14.5MiB/1005msec) 00:08:54.184 slat (usec): min=3, max=8375, avg=116.55, stdev=616.11 00:08:54.184 clat (usec): min=1133, max=28238, avg=15027.93, stdev=3282.58 00:08:54.184 lat (usec): min=6104, max=28467, avg=15144.48, stdev=3318.18 00:08:54.184 clat percentiles (usec): 00:08:54.184 | 1.00th=[ 8029], 5.00th=[10290], 10.00th=[12125], 20.00th=[13042], 00:08:54.184 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14484], 60.00th=[15139], 00:08:54.184 | 70.00th=[16188], 80.00th=[17171], 90.00th=[18744], 95.00th=[21103], 00:08:54.184 | 99.00th=[26084], 99.50th=[26346], 99.90th=[28181], 99.95th=[28181], 00:08:54.184 | 99.99th=[28181] 00:08:54.184 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:08:54.184 slat (usec): min=4, max=9998, avg=130.41, stdev=722.31 00:08:54.184 clat (usec): min=7225, max=39406, avg=17340.47, stdev=6338.43 00:08:54.184 lat (usec): min=7478, max=39426, avg=17470.88, stdev=6387.96 00:08:54.184 clat percentiles (usec): 00:08:54.184 | 1.00th=[ 8455], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:08:54.184 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14615], 60.00th=[15795], 00:08:54.184 | 70.00th=[16909], 80.00th=[21365], 90.00th=[26346], 95.00th=[32375], 00:08:54.184 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39584], 00:08:54.184 | 99.99th=[39584] 00:08:54.184 bw ( KiB/s): min=16368, max=16384, per=25.25%, avg=16376.00, stdev=11.31, samples=2 00:08:54.184 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:08:54.184 lat (msec) : 2=0.01%, 10=3.05%, 20=80.71%, 50=16.23% 00:08:54.184 cpu : usr=4.68%, sys=7.57%, ctx=367, majf=0, minf=1 00:08:54.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:54.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:54.184 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:54.184 00:08:54.184 Run status group 0 (all jobs): 00:08:54.184 READ: bw=58.6MiB/s (61.4MB/s), 11.5MiB/s-19.4MiB/s (12.1MB/s-20.3MB/s), io=61.1MiB (64.0MB), run=1002-1042msec 00:08:54.184 WRITE: bw=63.3MiB/s (66.4MB/s), 13.4MiB/s-20.0MiB/s (14.1MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1002-1042msec 00:08:54.184 00:08:54.184 Disk stats (read/write): 00:08:54.184 nvme0n1: ios=4148/4399, merge=0/0, ticks=25264/26347, in_queue=51611, util=93.69% 00:08:54.184 nvme0n2: ios=2606/2631, merge=0/0, ticks=15353/15879, in_queue=31232, util=95.94% 00:08:54.184 nvme0n3: ios=3413/3584, merge=0/0, ticks=23836/22630, in_queue=46466, util=97.19% 00:08:54.184 nvme0n4: ios=3191/3584, merge=0/0, ticks=22937/27501, in_queue=50438, util=100.00% 00:08:54.184 11:27:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:54.184 [global] 00:08:54.184 thread=1 00:08:54.184 invalidate=1 00:08:54.184 rw=randwrite 00:08:54.184 time_based=1 00:08:54.184 runtime=1 00:08:54.184 ioengine=libaio 00:08:54.184 direct=1 00:08:54.184 bs=4096 00:08:54.184 iodepth=128 00:08:54.184 norandommap=0 00:08:54.184 numjobs=1 00:08:54.184 00:08:54.184 verify_dump=1 00:08:54.184 verify_backlog=512 00:08:54.184 verify_state_save=0 00:08:54.184 do_verify=1 00:08:54.184 verify=crc32c-intel 00:08:54.184 [job0] 00:08:54.184 filename=/dev/nvme0n1 00:08:54.184 [job1] 00:08:54.184 
filename=/dev/nvme0n2 00:08:54.184 [job2] 00:08:54.184 filename=/dev/nvme0n3 00:08:54.184 [job3] 00:08:54.184 filename=/dev/nvme0n4 00:08:54.184 Could not set queue depth (nvme0n1) 00:08:54.184 Could not set queue depth (nvme0n2) 00:08:54.184 Could not set queue depth (nvme0n3) 00:08:54.184 Could not set queue depth (nvme0n4) 00:08:54.184 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.184 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.184 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.184 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.184 fio-3.35 00:08:54.184 Starting 4 threads 00:08:55.558 00:08:55.558 job0: (groupid=0, jobs=1): err= 0: pid=2856218: Fri Nov 15 11:27:35 2024 00:08:55.558 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:08:55.558 slat (usec): min=2, max=8277, avg=121.44, stdev=704.44 00:08:55.558 clat (usec): min=7070, max=24790, avg=14943.09, stdev=2722.21 00:08:55.558 lat (usec): min=7075, max=27844, avg=15064.53, stdev=2782.16 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 8586], 5.00th=[11076], 10.00th=[11863], 20.00th=[12911], 00:08:55.558 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14746], 60.00th=[15401], 00:08:55.558 | 70.00th=[15926], 80.00th=[16581], 90.00th=[18482], 95.00th=[20055], 00:08:55.558 | 99.00th=[23462], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:08:55.558 | 99.99th=[24773] 00:08:55.558 write: IOPS=3150, BW=12.3MiB/s (12.9MB/s)(12.5MiB/1013msec); 0 zone resets 00:08:55.558 slat (usec): min=3, max=43012, avg=189.28, stdev=1202.65 00:08:55.558 clat (usec): min=4272, max=70096, avg=25726.84, stdev=16427.07 00:08:55.558 lat (usec): min=4278, max=70104, avg=25916.12, stdev=16506.40 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 6652], 5.00th=[11338], 10.00th=[11994], 20.00th=[12911], 00:08:55.558 | 30.00th=[14615], 40.00th=[18220], 50.00th=[21890], 60.00th=[23200], 00:08:55.558 | 70.00th=[24249], 80.00th=[33162], 90.00th=[55837], 95.00th=[65799], 00:08:55.558 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:08:55.558 | 99.99th=[69731] 00:08:55.558 bw ( KiB/s): min= 9680, max=14904, per=18.30%, avg=12292.00, stdev=3693.93, samples=2 00:08:55.558 iops : min= 2420, max= 3726, avg=3073.00, stdev=923.48, samples=2 00:08:55.558 lat (msec) : 10=3.07%, 20=66.26%, 50=23.34%, 100=7.33% 00:08:55.558 cpu : usr=2.47%, sys=4.55%, ctx=332, majf=0, minf=1 00:08:55.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:08:55.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.558 issued rwts: total=3072,3191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.558 job1: (groupid=0, jobs=1): err= 0: pid=2856219: Fri Nov 15 11:27:35 2024 00:08:55.558 read: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1004msec) 00:08:55.558 slat (usec): min=2, max=11294, avg=97.72, stdev=586.09 00:08:55.558 clat (usec): min=3047, max=41813, avg=12786.93, stdev=5473.94 00:08:55.558 lat (usec): min=3055, max=43437, avg=12884.65, stdev=5525.29 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 5669], 5.00th=[ 8848], 
10.00th=[ 9634], 20.00th=[10421], 00:08:55.558 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11600], 00:08:55.558 | 70.00th=[12125], 80.00th=[13566], 90.00th=[15139], 95.00th=[29230], 00:08:55.558 | 99.00th=[33817], 99.50th=[37487], 99.90th=[41157], 99.95th=[41681], 00:08:55.558 | 99.99th=[41681] 00:08:55.558 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:08:55.558 slat (usec): min=3, max=6263, avg=89.71, stdev=417.25 00:08:55.558 clat (usec): min=4314, max=37401, avg=12058.07, stdev=3513.93 00:08:55.558 lat (usec): min=5087, max=37425, avg=12147.77, stdev=3540.16 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 7111], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10552], 00:08:55.558 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:08:55.558 | 70.00th=[11863], 80.00th=[12387], 90.00th=[14222], 95.00th=[19006], 00:08:55.558 | 99.00th=[28705], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:08:55.558 | 99.99th=[37487] 00:08:55.558 bw ( KiB/s): min=18808, max=22152, per=30.48%, avg=20480.00, stdev=2364.57, samples=2 00:08:55.558 iops : min= 4702, max= 5538, avg=5120.00, stdev=591.14, samples=2 00:08:55.558 lat (msec) : 4=0.24%, 10=12.02%, 20=81.95%, 50=5.79% 00:08:55.558 cpu : usr=5.08%, sys=8.28%, ctx=586, majf=0, minf=1 00:08:55.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:55.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.558 issued rwts: total=5084,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.558 job2: (groupid=0, jobs=1): err= 0: pid=2856220: Fri Nov 15 11:27:35 2024 00:08:55.558 read: IOPS=3136, BW=12.3MiB/s (12.8MB/s)(12.4MiB/1012msec) 00:08:55.558 slat (usec): min=2, max=13149, avg=134.56, stdev=867.90 00:08:55.558 clat (usec): min=4652, max=37319, avg=15619.14, stdev=5286.48 00:08:55.558 lat (usec): min=4657, max=37336, avg=15753.70, stdev=5348.04 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 6456], 5.00th=[ 9634], 10.00th=[11338], 20.00th=[12911], 00:08:55.558 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13829], 60.00th=[13960], 00:08:55.558 | 70.00th=[15795], 80.00th=[17957], 90.00th=[22414], 95.00th=[28181], 00:08:55.558 | 99.00th=[33817], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:08:55.558 | 99.99th=[37487] 00:08:55.558 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:08:55.558 slat (usec): min=3, max=11538, avg=149.30, stdev=614.86 00:08:55.558 clat (usec): min=1084, max=52804, avg=21886.48, stdev=12269.27 00:08:55.558 lat (usec): min=2641, max=52820, avg=22035.78, stdev=12356.69 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 4621], 5.00th=[ 7504], 10.00th=[ 9372], 20.00th=[12649], 00:08:55.558 | 30.00th=[13304], 40.00th=[13829], 50.00th=[16450], 60.00th=[23200], 00:08:55.558 | 70.00th=[24511], 80.00th=[34866], 90.00th=[42730], 95.00th=[45876], 00:08:55.558 | 99.00th=[50070], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:08:55.558 | 99.99th=[52691] 00:08:55.558 bw ( KiB/s): min=12088, max=16384, per=21.19%, avg=14236.00, stdev=3037.73, samples=2 00:08:55.558 iops : min= 3022, max= 4096, avg=3559.00, stdev=759.43, samples=2 00:08:55.558 lat (msec) : 2=0.01%, 4=0.27%, 10=9.69%, 20=57.31%, 50=32.29% 00:08:55.558 lat (msec) : 100=0.43% 00:08:55.558 cpu : usr=3.66%, sys=7.22%, ctx=437, 
majf=0, minf=1 00:08:55.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:55.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.558 issued rwts: total=3174,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.558 job3: (groupid=0, jobs=1): err= 0: pid=2856221: Fri Nov 15 11:27:35 2024 00:08:55.558 read: IOPS=4752, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1002msec) 00:08:55.558 slat (usec): min=2, max=10950, avg=103.10, stdev=649.83 00:08:55.558 clat (usec): min=1462, max=24241, avg=13243.16, stdev=2624.81 00:08:55.558 lat (usec): min=1474, max=24257, avg=13346.26, stdev=2648.59 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 5342], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11600], 00:08:55.558 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[13566], 00:08:55.558 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15926], 95.00th=[17695], 00:08:55.558 | 99.00th=[22414], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:08:55.558 | 99.99th=[24249] 00:08:55.558 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:08:55.558 slat (usec): min=4, max=11376, avg=89.45, stdev=545.58 00:08:55.558 clat (usec): min=1322, max=25418, avg=12510.24, stdev=2370.13 00:08:55.558 lat (usec): min=1332, max=25463, avg=12599.69, stdev=2423.39 00:08:55.558 clat percentiles (usec): 00:08:55.558 | 1.00th=[ 4883], 5.00th=[ 7635], 10.00th=[10028], 20.00th=[11469], 00:08:55.558 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12911], 60.00th=[13304], 00:08:55.558 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:08:55.558 | 99.00th=[21103], 99.50th=[23200], 99.90th=[23987], 99.95th=[24249], 00:08:55.558 | 99.99th=[25297] 00:08:55.558 bw ( KiB/s): min=20480, max=20480, per=30.48%, avg=20480.00, stdev= 0.00, samples=2 00:08:55.558 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:08:55.558 lat (msec) : 2=0.16%, 4=0.14%, 10=7.54%, 20=90.08%, 50=2.07% 00:08:55.558 cpu : usr=6.79%, sys=8.89%, ctx=511, majf=0, minf=1 00:08:55.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:55.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:55.558 issued rwts: total=4762,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:55.558 00:08:55.558 Run status group 0 (all jobs): 00:08:55.558 READ: bw=62.1MiB/s (65.1MB/s), 11.8MiB/s-19.8MiB/s (12.4MB/s-20.7MB/s), io=62.9MiB (65.9MB), run=1002-1013msec 00:08:55.558 WRITE: bw=65.6MiB/s (68.8MB/s), 12.3MiB/s-20.0MiB/s (12.9MB/s-20.9MB/s), io=66.5MiB (69.7MB), run=1002-1013msec 00:08:55.558 00:08:55.558 Disk stats (read/write): 00:08:55.558 nvme0n1: ios=2598/2775, merge=0/0, ticks=19484/30476, in_queue=49960, util=97.60% 00:08:55.558 nvme0n2: ios=4115/4400, merge=0/0, ticks=19708/21076, in_queue=40784, util=97.05% 00:08:55.558 nvme0n3: ios=2785/3072, merge=0/0, ticks=41915/62262, in_queue=104177, util=88.96% 00:08:55.558 nvme0n4: ios=4096/4359, merge=0/0, ticks=42704/41549, in_queue=84253, util=89.62% 00:08:55.558 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:55.558 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # 
fio_pid=2856363 00:08:55.559 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:55.559 11:27:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:55.559 [global] 00:08:55.559 thread=1 00:08:55.559 invalidate=1 00:08:55.559 rw=read 00:08:55.559 time_based=1 00:08:55.559 runtime=10 00:08:55.559 ioengine=libaio 00:08:55.559 direct=1 00:08:55.559 bs=4096 00:08:55.559 iodepth=1 00:08:55.559 norandommap=1 00:08:55.559 numjobs=1 00:08:55.559 00:08:55.559 [job0] 00:08:55.559 filename=/dev/nvme0n1 00:08:55.559 [job1] 00:08:55.559 filename=/dev/nvme0n2 00:08:55.559 [job2] 00:08:55.559 filename=/dev/nvme0n3 00:08:55.559 [job3] 00:08:55.559 filename=/dev/nvme0n4 00:08:55.559 Could not set queue depth (nvme0n1) 00:08:55.559 Could not set queue depth (nvme0n2) 00:08:55.559 Could not set queue depth (nvme0n3) 00:08:55.559 Could not set queue depth (nvme0n4) 00:08:55.816 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.816 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.816 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.816 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:55.816 fio-3.35 00:08:55.816 Starting 4 threads 00:08:58.342 11:27:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:58.908 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:58.908 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=356352, buflen=4096 00:08:58.908 fio: pid=2856571, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:58.908 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:58.908 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:59.165 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=3440640, buflen=4096 00:08:59.165 fio: pid=2856570, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:59.423 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.423 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:59.423 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3284992, buflen=4096 00:08:59.423 fio: pid=2856567, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:59.682 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=18001920, buflen=4096 00:08:59.682 fio: pid=2856569, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:59.682 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.682 11:27:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:59.682 00:08:59.682 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2856567: Fri Nov 15 11:27:39 2024 00:08:59.682 read: IOPS=229, BW=918KiB/s (940kB/s)(3208KiB/3494msec) 00:08:59.682 slat (usec): min=4, max=10954, avg=31.45, stdev=386.14 00:08:59.682 clat (usec): min=166, max=49139, avg=4290.60, stdev=12186.28 00:08:59.682 lat (usec): min=171, max=52994, avg=4322.04, stdev=12236.29 00:08:59.682 clat percentiles (usec): 00:08:59.682 | 1.00th=[ 176], 5.00th=[ 225], 10.00th=[ 247], 20.00th=[ 269], 00:08:59.682 | 30.00th=[ 281], 40.00th=[ 302], 50.00th=[ 355], 60.00th=[ 404], 00:08:59.682 | 70.00th=[ 449], 80.00th=[ 486], 90.00th=[ 553], 95.00th=[42206], 00:08:59.682 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:08:59.682 | 99.99th=[49021] 00:08:59.682 bw ( KiB/s): min= 96, max= 5848, per=16.21%, avg=1054.67, stdev=2348.24, samples=6 00:08:59.682 iops : min= 24, max= 1462, avg=263.67, stdev=587.06, samples=6 00:08:59.682 lat (usec) : 250=10.83%, 500=72.35%, 750=7.22% 00:08:59.682 lat (msec) : 50=9.46% 00:08:59.682 cpu : usr=0.09%, sys=0.52%, ctx=805, majf=0, minf=2 00:08:59.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.682 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.682 issued rwts: total=803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.682 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2856569: Fri Nov 15 11:27:39 2024 00:08:59.682 read: IOPS=1166, BW=4667KiB/s (4779kB/s)(17.2MiB/3767msec) 00:08:59.682 slat (usec): min=5, max=34435, avg=29.10, stdev=609.46 00:08:59.682 clat (usec): min=168, max=42118, avg=819.70, stdev=4873.35 00:08:59.682 lat (usec): min=174, max=42153, avg=848.80, stdev=4910.37 00:08:59.682 clat percentiles (usec): 00:08:59.682 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:08:59.682 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 221], 60.00th=[ 229], 00:08:59.682 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 306], 00:08:59.682 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.682 | 99.99th=[42206] 00:08:59.682 bw ( KiB/s): min= 504, max=13528, per=61.14%, avg=3976.00, stdev=4592.19, samples=7 00:08:59.682 iops : min= 126, max= 3382, avg=994.00, stdev=1148.05, samples=7 00:08:59.682 lat (usec) : 250=81.78%, 500=15.67%, 750=1.07% 00:08:59.682 lat (msec) : 50=1.46% 00:08:59.682 cpu : usr=0.96%, sys=1.81%, ctx=4405, majf=0, minf=2 00:08:59.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.682 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.682 issued rwts: total=4396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.682 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2856570: Fri Nov 15 11:27:39 2024 00:08:59.682 read: IOPS=262, BW=1047KiB/s 
(1072kB/s)(3360KiB/3209msec) 00:08:59.682 slat (nsec): min=4367, max=53393, avg=16063.27, stdev=9095.52 00:08:59.682 clat (usec): min=189, max=41333, avg=3772.35, stdev=11377.67 00:08:59.682 lat (usec): min=194, max=41367, avg=3788.42, stdev=11379.91 00:08:59.682 clat percentiles (usec): 00:08:59.682 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 245], 00:08:59.682 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:08:59.682 | 70.00th=[ 318], 80.00th=[ 363], 90.00th=[ 449], 95.00th=[41157], 00:08:59.682 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.682 | 99.99th=[41157] 00:08:59.682 bw ( KiB/s): min= 104, max= 5992, per=17.10%, avg=1112.00, stdev=2391.20, samples=6 00:08:59.682 iops : min= 26, max= 1498, avg=278.00, stdev=597.80, samples=6 00:08:59.682 lat (usec) : 250=22.47%, 500=68.25%, 750=0.59% 00:08:59.682 lat (msec) : 50=8.56% 00:08:59.682 cpu : usr=0.12%, sys=0.50%, ctx=841, majf=0, minf=2 00:08:59.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.682 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.682 issued rwts: total=841,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.683 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2856571: Fri Nov 15 11:27:39 2024 00:08:59.683 read: IOPS=29, BW=118KiB/s (121kB/s)(348KiB/2939msec) 00:08:59.683 slat (nsec): min=6232, max=37669, avg=22311.85, stdev=9801.02 00:08:59.683 clat (usec): min=203, max=41288, avg=33494.29, stdev=15799.64 00:08:59.683 lat (usec): min=221, max=41296, avg=33516.71, stdev=15801.40 00:08:59.683 clat percentiles (usec): 00:08:59.683 | 1.00th=[ 204], 5.00th=[ 293], 10.00th=[ 429], 20.00th=[40633], 00:08:59.683 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:59.683 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:59.683 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:59.683 | 99.99th=[41157] 00:08:59.683 bw ( KiB/s): min= 96, max= 192, per=1.85%, avg=120.00, stdev=40.40, samples=5 00:08:59.683 iops : min= 24, max= 48, avg=30.00, stdev=10.10, samples=5 00:08:59.683 lat (usec) : 250=1.14%, 500=14.77%, 750=2.27% 00:08:59.683 lat (msec) : 50=80.68% 00:08:59.683 cpu : usr=0.10%, sys=0.00%, ctx=88, majf=0, minf=1 00:08:59.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:59.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.683 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.683 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:59.683 00:08:59.683 Run status group 0 (all jobs): 00:08:59.683 READ: bw=6503KiB/s (6659kB/s), 118KiB/s-4667KiB/s (121kB/s-4779kB/s), io=23.9MiB (25.1MB), run=2939-3767msec 00:08:59.683 00:08:59.683 Disk stats (read/write): 00:08:59.683 nvme0n1: ios=799/0, merge=0/0, ticks=3289/0, in_queue=3289, util=95.77% 00:08:59.683 nvme0n2: ios=3824/0, merge=0/0, ticks=3437/0, in_queue=3437, util=94.99% 00:08:59.683 nvme0n3: ios=838/0, merge=0/0, ticks=3074/0, in_queue=3074, util=96.82% 00:08:59.683 nvme0n4: ios=85/0, merge=0/0, ticks=2835/0, in_queue=2835, util=96.75% 00:08:59.941 11:27:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.941 11:27:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:00.199 11:27:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.199 11:27:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:00.457 11:27:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.457 11:27:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:00.715 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.715 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:00.972 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:00.972 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2856363 00:09:00.972 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:00.972 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:01.230 nvmf hotplug test: fio failed as expected 00:09:01.230 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:01.488 11:27:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:01.488 rmmod nvme_tcp 00:09:01.488 rmmod nvme_fabrics 00:09:01.488 rmmod nvme_keyring 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2854442 ']' 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2854442 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2854442 ']' 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2854442 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854442 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854442' 00:09:01.488 killing process with pid 2854442 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2854442 00:09:01.488 11:27:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2854442 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@791 -- # iptables-restore 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.747 11:27:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:04.285 00:09:04.285 real 0m24.220s 00:09:04.285 user 1m25.864s 00:09:04.285 sys 0m6.274s 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:04.285 ************************************ 00:09:04.285 END TEST nvmf_fio_target 00:09:04.285 ************************************ 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.285 ************************************ 00:09:04.285 START TEST nvmf_bdevio 00:09:04.285 ************************************ 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:04.285 * Looking for test storage... 
00:09:04.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.285 --rc genhtml_branch_coverage=1 00:09:04.285 --rc genhtml_function_coverage=1 00:09:04.285 --rc genhtml_legend=1 00:09:04.285 --rc geninfo_all_blocks=1 00:09:04.285 --rc geninfo_unexecuted_blocks=1 00:09:04.285 00:09:04.285 ' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.285 --rc genhtml_branch_coverage=1 00:09:04.285 --rc genhtml_function_coverage=1 00:09:04.285 --rc genhtml_legend=1 00:09:04.285 --rc geninfo_all_blocks=1 00:09:04.285 --rc geninfo_unexecuted_blocks=1 00:09:04.285 00:09:04.285 ' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.285 --rc genhtml_branch_coverage=1 00:09:04.285 --rc genhtml_function_coverage=1 00:09:04.285 --rc genhtml_legend=1 00:09:04.285 --rc geninfo_all_blocks=1 00:09:04.285 --rc geninfo_unexecuted_blocks=1 00:09:04.285 00:09:04.285 ' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.285 --rc genhtml_branch_coverage=1 00:09:04.285 --rc genhtml_function_coverage=1 00:09:04.285 --rc genhtml_legend=1 00:09:04.285 --rc geninfo_all_blocks=1 00:09:04.285 --rc geninfo_unexecuted_blocks=1 00:09:04.285 00:09:04.285 ' 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:04.285 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:04.286 11:27:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:06.188 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:06.188 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.188 11:27:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:06.188 Found net devices under 0000:09:00.0: cvl_0_0 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.188 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:06.189 Found net devices under 0000:09:00.1: cvl_0_1 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.189 
11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.189 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:09:06.516 00:09:06.516 --- 10.0.0.2 ping statistics --- 00:09:06.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.516 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:06.516 00:09:06.516 --- 10.0.0.1 ping statistics --- 00:09:06.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.516 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2859224 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2859224 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2859224 ']' 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.516 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.516 [2024-11-15 11:27:46.728458] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
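[Annotation] The trace above moves one port of the E810 NIC (cvl_0_0) into a dedicated network namespace, addresses both ends, opens TCP port 4420, ping-checks the link in both directions, and then launches nvmf_tgt inside that namespace. A minimal standalone sketch of the same setup, with hypothetical interface and namespace names (tgt0/ini0/spdk_tgt_ns) standing in for cvl_0_0/cvl_0_1/cvl_0_0_ns_spdk:

  # Target-facing port lives in its own namespace; the initiator port stays in the root namespace.
  ip netns add spdk_tgt_ns
  ip link set tgt0 netns spdk_tgt_ns
  ip addr add 10.0.0.1/24 dev ini0                                  # initiator address (root namespace)
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0        # target address (inside the namespace)
  ip link set ini0 up
  ip netns exec spdk_tgt_ns ip link set tgt0 up
  ip netns exec spdk_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic, as the log's ipts helper does
  ping -c 1 10.0.0.2                                                # reachability check before starting the target
  ip netns exec spdk_tgt_ns ./build/bin/nvmf_tgt -m 0x78 &          # target app runs inside the namespace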
00:09:06.516 [2024-11-15 11:27:46.728530] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.516 [2024-11-15 11:27:46.800358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.516 [2024-11-15 11:27:46.861394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.516 [2024-11-15 11:27:46.861449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.516 [2024-11-15 11:27:46.861479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.516 [2024-11-15 11:27:46.861491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.516 [2024-11-15 11:27:46.861500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.516 [2024-11-15 11:27:46.863217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.516 [2024-11-15 11:27:46.863280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:06.516 [2024-11-15 11:27:46.863330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:06.516 [2024-11-15 11:27:46.863333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.827 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.827 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:06.827 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.827 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.827 11:27:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.827 [2024-11-15 11:27:47.025297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.827 Malloc0 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.827 11:27:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:06.827 [2024-11-15 11:27:47.098915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.827 { 00:09:06.827 "params": { 00:09:06.827 "name": "Nvme$subsystem", 00:09:06.827 "trtype": "$TEST_TRANSPORT", 00:09:06.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.827 "adrfam": "ipv4", 00:09:06.827 "trsvcid": "$NVMF_PORT", 00:09:06.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.827 "hdgst": ${hdgst:-false}, 00:09:06.827 "ddgst": ${ddgst:-false} 00:09:06.827 }, 00:09:06.827 "method": "bdev_nvme_attach_controller" 00:09:06.827 } 00:09:06.827 EOF 00:09:06.827 )") 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:06.827 11:27:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.827 "params": { 00:09:06.827 "name": "Nvme1", 00:09:06.827 "trtype": "tcp", 00:09:06.827 "traddr": "10.0.0.2", 00:09:06.827 "adrfam": "ipv4", 00:09:06.827 "trsvcid": "4420", 00:09:06.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.827 "hdgst": false, 00:09:06.827 "ddgst": false 00:09:06.827 }, 00:09:06.827 "method": "bdev_nvme_attach_controller" 00:09:06.827 }' 00:09:06.827 [2024-11-15 11:27:47.154396] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
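[Annotation] At this point the target has been configured over RPC: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420; bdevio is then handed a bdev_nvme_attach_controller JSON config on /dev/fd/62 so it can exercise the exported namespace as Nvme1n1. The rpc_cmd calls traced above correspond to direct scripts/rpc.py invocations roughly like the sketch below (assuming the default /var/tmp/spdk.sock RPC socket):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420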
00:09:06.827 [2024-11-15 11:27:47.154475] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859254 ] 00:09:06.827 [2024-11-15 11:27:47.228373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:07.091 [2024-11-15 11:27:47.294590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.091 [2024-11-15 11:27:47.294642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.091 [2024-11-15 11:27:47.294646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.349 I/O targets: 00:09:07.349 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:07.349 00:09:07.349 00:09:07.349 CUnit - A unit testing framework for C - Version 2.1-3 00:09:07.349 http://cunit.sourceforge.net/ 00:09:07.349 00:09:07.349 00:09:07.349 Suite: bdevio tests on: Nvme1n1 00:09:07.349 Test: blockdev write read block ...passed 00:09:07.349 Test: blockdev write zeroes read block ...passed 00:09:07.349 Test: blockdev write zeroes read no split ...passed 00:09:07.349 Test: blockdev write zeroes read split ...passed 00:09:07.349 Test: blockdev write zeroes read split partial ...passed 00:09:07.349 Test: blockdev reset ...[2024-11-15 11:27:47.753547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:07.349 [2024-11-15 11:27:47.753644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3d640 (9): Bad file descriptor 00:09:07.349 [2024-11-15 11:27:47.770996] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:07.349 passed 00:09:07.607 Test: blockdev write read 8 blocks ...passed 00:09:07.607 Test: blockdev write read size > 128k ...passed 00:09:07.607 Test: blockdev write read invalid size ...passed 00:09:07.607 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:07.607 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:07.607 Test: blockdev write read max offset ...passed 00:09:07.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:07.607 Test: blockdev writev readv 8 blocks ...passed 00:09:07.607 Test: blockdev writev readv 30 x 1block ...passed 00:09:07.607 Test: blockdev writev readv block ...passed 00:09:07.607 Test: blockdev writev readv size > 128k ...passed 00:09:07.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:07.607 Test: blockdev comparev and writev ...[2024-11-15 11:27:47.982671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.982708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.982734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.982751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.983073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.983098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.983121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.983138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.983466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.983491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.983513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.983529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.983853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.983877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:07.607 [2024-11-15 11:27:47.983898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:07.607 [2024-11-15 11:27:47.983914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:07.607 passed 00:09:07.865 Test: blockdev nvme passthru rw ...passed 00:09:07.865 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:27:48.065550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.865 [2024-11-15 11:27:48.065581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:07.865 [2024-11-15 11:27:48.065739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.865 [2024-11-15 11:27:48.065763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:07.865 [2024-11-15 11:27:48.065913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.865 [2024-11-15 11:27:48.065937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:07.865 [2024-11-15 11:27:48.066082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:07.865 [2024-11-15 11:27:48.066107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:07.865 passed 00:09:07.865 Test: blockdev nvme admin passthru ...passed 00:09:07.865 Test: blockdev copy ...passed 00:09:07.865 00:09:07.865 Run Summary: Type Total Ran Passed Failed Inactive 00:09:07.865 suites 1 1 n/a 0 0 00:09:07.865 tests 23 23 23 0 0 00:09:07.865 asserts 152 152 152 0 n/a 00:09:07.865 00:09:07.865 Elapsed time = 0.967 seconds 00:09:08.123 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.123 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.123 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:08.123 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.124 rmmod nvme_tcp 00:09:08.124 rmmod nvme_fabrics 00:09:08.124 rmmod nvme_keyring 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
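[Annotation] The CUnit summary above reports all 23 bdevio tests passing in under a second (the COMPARE FAILURE / ABORTED - FAILED FUSED notices are expected output of the comparev-and-writev cases, not errors). What follows is the standard teardown: delete the subsystem, clear the exit trap, and let nvmftestfini unload the initiator modules, kill the target process, strip the SPDK_NVMF-tagged iptables rules, and remove the namespace. A condensed sketch of that cleanup, reusing the hypothetical names from the earlier setup sketch:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp                                           # also drops nvme_fabrics / nvme_keyring as shown above
  kill "$nvmfpid"                                                   # stop the namespaced nvmf_tgt (pid 2859224 here)
  iptables-save | grep -v SPDK_NVMF | iptables-restore              # remove only the rules tagged by the test
  ip netns delete spdk_tgt_ns                                       # namespace removal; the log does this via _remove_spdk_ns
  ip -4 addr flush ini0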
00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2859224 ']' 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2859224 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2859224 ']' 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2859224 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859224 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859224' 00:09:08.124 killing process with pid 2859224 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2859224 00:09:08.124 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2859224 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.383 11:27:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.923 00:09:10.923 real 0m6.559s 00:09:10.923 user 0m10.416s 00:09:10.923 sys 0m2.248s 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:10.923 ************************************ 00:09:10.923 END TEST nvmf_bdevio 00:09:10.923 ************************************ 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:10.923 00:09:10.923 real 3m57.096s 00:09:10.923 user 10m22.629s 00:09:10.923 sys 1m6.515s 
00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.923 ************************************ 00:09:10.923 END TEST nvmf_target_core 00:09:10.923 ************************************ 00:09:10.923 11:27:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:10.923 11:27:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.923 11:27:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.923 11:27:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.923 ************************************ 00:09:10.923 START TEST nvmf_target_extra 00:09:10.923 ************************************ 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:10.923 * Looking for test storage... 00:09:10.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.923 --rc genhtml_branch_coverage=1 00:09:10.923 --rc genhtml_function_coverage=1 00:09:10.923 --rc genhtml_legend=1 00:09:10.923 --rc geninfo_all_blocks=1 00:09:10.923 --rc geninfo_unexecuted_blocks=1 00:09:10.923 00:09:10.923 ' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.923 --rc genhtml_branch_coverage=1 00:09:10.923 --rc genhtml_function_coverage=1 00:09:10.923 --rc genhtml_legend=1 00:09:10.923 --rc geninfo_all_blocks=1 00:09:10.923 --rc geninfo_unexecuted_blocks=1 00:09:10.923 00:09:10.923 ' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.923 --rc genhtml_branch_coverage=1 00:09:10.923 --rc genhtml_function_coverage=1 00:09:10.923 --rc genhtml_legend=1 00:09:10.923 --rc geninfo_all_blocks=1 00:09:10.923 --rc geninfo_unexecuted_blocks=1 00:09:10.923 00:09:10.923 ' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.923 --rc genhtml_branch_coverage=1 00:09:10.923 --rc genhtml_function_coverage=1 00:09:10.923 --rc genhtml_legend=1 00:09:10.923 --rc geninfo_all_blocks=1 00:09:10.923 --rc geninfo_unexecuted_blocks=1 00:09:10.923 00:09:10.923 ' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.923 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:10.924 ************************************ 00:09:10.924 START TEST nvmf_example 00:09:10.924 ************************************ 00:09:10.924 11:27:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:10.924 * Looking for test storage... 
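[Annotation] The "[: : integer expression expected" message from common.sh line 33 is a benign artifact of build_nvmf_app_args running a numeric test against an unset variable ('[' '' -eq 1 ']' in the trace); the comparison simply fails and the run continues. A defensive pattern that would avoid the noise, shown with a placeholder variable name:

  # hedged sketch: default the flag to 0 so the numeric test never sees an empty string
  if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then    # SPDK_TEST_FLAG is a hypothetical name, not the one at common.sh line 33
      echo "feature enabled"
  fi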
00:09:10.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.924 --rc genhtml_branch_coverage=1 00:09:10.924 --rc genhtml_function_coverage=1 00:09:10.924 --rc genhtml_legend=1 00:09:10.924 --rc geninfo_all_blocks=1 00:09:10.924 --rc geninfo_unexecuted_blocks=1 00:09:10.924 00:09:10.924 ' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.924 --rc genhtml_branch_coverage=1 00:09:10.924 --rc genhtml_function_coverage=1 00:09:10.924 --rc genhtml_legend=1 00:09:10.924 --rc geninfo_all_blocks=1 00:09:10.924 --rc geninfo_unexecuted_blocks=1 00:09:10.924 00:09:10.924 ' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.924 --rc genhtml_branch_coverage=1 00:09:10.924 --rc genhtml_function_coverage=1 00:09:10.924 --rc genhtml_legend=1 00:09:10.924 --rc geninfo_all_blocks=1 00:09:10.924 --rc geninfo_unexecuted_blocks=1 00:09:10.924 00:09:10.924 ' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.924 --rc genhtml_branch_coverage=1 00:09:10.924 --rc genhtml_function_coverage=1 00:09:10.924 --rc genhtml_legend=1 00:09:10.924 --rc geninfo_all_blocks=1 00:09:10.924 --rc geninfo_unexecuted_blocks=1 00:09:10.924 00:09:10.924 ' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:10.924 11:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.924 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:10.925 11:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.925 11:27:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:13.458 11:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:13.458 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:13.458 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:13.459 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:13.459 Found net devices under 0000:09:00.0: cvl_0_0 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:13.459 Found net devices under 0000:09:00.1: cvl_0_1 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.459 11:27:53 
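The device discovery traced above is, at bottom, a sysfs walk: for each supported NIC PCI function, the kernel interface name is read from /sys/bus/pci/devices/<address>/net/. A minimal standalone sketch of that lookup, using the first E810 port reported in this run (the PCI address is just this run's example):

  pci=0000:09:00.0                                   # first port found above
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue                       # this function has no registered net interface
    echo "Found net device under $pci: ${path##*/}"  # prints cvl_0_0 on this machine
  done

The same loop over 0000:09:00.1 yields cvl_0_1, which is why the harness ends up with exactly two usable interfaces for the TCP test.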
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:13.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:09:13.459 00:09:13.459 --- 10.0.0.2 ping statistics --- 00:09:13.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.459 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:09:13.459 00:09:13.459 --- 10.0.0.1 ping statistics --- 00:09:13.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.459 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2861510 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2861510 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2861510 ']' 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.459 11:27:53 
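Condensed from the trace above: the target-side port is moved into its own network namespace so that initiator and target traffic leaves the host stack through the physical ports rather than loopback, and reachability is then verified in both directions. The same steps as a standalone sketch (commands mirror the trace; the iptables comment tag used later for cleanup is omitted):

  ip netns add cvl_0_0_ns_spdk                         # namespace that owns the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow TCP/4420, as in the trace
  ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> root namespace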
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.459 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:13.717 11:27:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:25.913 Initializing NVMe Controllers 00:09:25.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:25.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:25.913 Initialization complete. Launching workers. 00:09:25.913 ======================================================== 00:09:25.913 Latency(us) 00:09:25.913 Device Information : IOPS MiB/s Average min max 00:09:25.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14542.30 56.81 4400.74 882.81 16074.21 00:09:25.913 ======================================================== 00:09:25.913 Total : 14542.30 56.81 4400.74 882.81 16074.21 00:09:25.913 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.913 rmmod nvme_tcp 00:09:25.913 rmmod nvme_fabrics 00:09:25.913 rmmod nvme_keyring 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2861510 ']' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2861510 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2861510 ']' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2861510 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2861510 00:09:25.913 11:28:04 
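Stripped of the harness, the example target is configured with a handful of RPCs and then exercised from the initiator side with spdk_nvme_perf. A hedged sketch, assuming rpc_cmd forwards to scripts/rpc.py in the SPDK checkout, with flags copied exactly from the trace:

  RPC=./scripts/rpc.py                                 # assumed rpc_cmd equivalent
  $RPC nvmf_create_transport -t tcp -o -u 8192         # TCP transport, options as in the trace
  $RPC bdev_malloc_create 64 512                       # 64 MiB RAM-backed bdev, 512-byte blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The summary table above (about 14.5k IOPS at roughly 4.4 ms average latency for 4 KiB mixed random I/O at queue depth 64) is informational; the harness primarily checks that the workload completes and the target shuts down cleanly.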
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2861510' 00:09:25.913 killing process with pid 2861510 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2861510 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2861510 00:09:25.913 nvmf threads initialize successfully 00:09:25.913 bdev subsystem init successfully 00:09:25.913 created a nvmf target service 00:09:25.913 create targets's poll groups done 00:09:25.913 all subsystems of target started 00:09:25.913 nvmf target is running 00:09:25.913 all subsystems of target stopped 00:09:25.913 destroy targets's poll groups done 00:09:25.913 destroyed the nvmf target service 00:09:25.913 bdev subsystem finish successfully 00:09:25.913 nvmf threads destroy successfully 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.913 11:28:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.171 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.171 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:26.171 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.171 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:26.430 00:09:26.430 real 0m15.616s 00:09:26.430 user 0m42.799s 00:09:26.430 sys 0m3.458s 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:26.430 ************************************ 00:09:26.430 END TEST nvmf_example 00:09:26.430 ************************************ 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:26.430 ************************************ 00:09:26.430 START TEST nvmf_filesystem 00:09:26.430 ************************************ 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:26.430 * Looking for test storage... 00:09:26.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.430 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.431 --rc genhtml_branch_coverage=1 00:09:26.431 --rc genhtml_function_coverage=1 00:09:26.431 --rc genhtml_legend=1 00:09:26.431 --rc geninfo_all_blocks=1 00:09:26.431 --rc geninfo_unexecuted_blocks=1 00:09:26.431 00:09:26.431 ' 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.431 --rc genhtml_branch_coverage=1 00:09:26.431 --rc genhtml_function_coverage=1 00:09:26.431 --rc genhtml_legend=1 00:09:26.431 --rc geninfo_all_blocks=1 00:09:26.431 --rc geninfo_unexecuted_blocks=1 00:09:26.431 00:09:26.431 ' 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.431 --rc genhtml_branch_coverage=1 00:09:26.431 --rc genhtml_function_coverage=1 00:09:26.431 --rc genhtml_legend=1 00:09:26.431 --rc geninfo_all_blocks=1 00:09:26.431 --rc geninfo_unexecuted_blocks=1 00:09:26.431 00:09:26.431 ' 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.431 --rc genhtml_branch_coverage=1 00:09:26.431 --rc genhtml_function_coverage=1 00:09:26.431 --rc genhtml_legend=1 00:09:26.431 --rc geninfo_all_blocks=1 00:09:26.431 --rc geninfo_unexecuted_blocks=1 00:09:26.431 00:09:26.431 ' 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:26.431 11:28:06 
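The block above is scripts/common.sh deciding whether the detected lcov (reported as 1.15 here) predates version 2: both version strings are split on '.', '-' and ':' and compared numerically field by field. A self-contained sketch of the same comparison (not the script's literal code):

  version_lt() {                                   # succeeds when $1 sorts before $2
    local -a a b
    local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                       # versions are equal
  }
  version_lt 1.15 2 && echo "older than 2"         # the branch this run takes, hence the lcov_branch_coverage options above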
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:26.431 
11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:26.431 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:26.432 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:26.432 #define SPDK_CONFIG_H 00:09:26.432 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:26.432 #define SPDK_CONFIG_APPS 1 00:09:26.432 #define SPDK_CONFIG_ARCH native 00:09:26.432 #undef SPDK_CONFIG_ASAN 00:09:26.432 #undef SPDK_CONFIG_AVAHI 00:09:26.432 #undef SPDK_CONFIG_CET 00:09:26.432 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:26.432 #define SPDK_CONFIG_COVERAGE 1 00:09:26.432 #define SPDK_CONFIG_CROSS_PREFIX 00:09:26.432 #undef SPDK_CONFIG_CRYPTO 00:09:26.432 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:26.432 #undef SPDK_CONFIG_CUSTOMOCF 00:09:26.432 #undef SPDK_CONFIG_DAOS 00:09:26.432 #define SPDK_CONFIG_DAOS_DIR 00:09:26.432 #define SPDK_CONFIG_DEBUG 1 00:09:26.432 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:26.432 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:26.432 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:26.432 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:26.432 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:26.432 #undef SPDK_CONFIG_DPDK_UADK 00:09:26.432 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:26.432 #define SPDK_CONFIG_EXAMPLES 1 00:09:26.432 #undef SPDK_CONFIG_FC 00:09:26.432 #define SPDK_CONFIG_FC_PATH 00:09:26.432 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:26.432 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:26.432 #define SPDK_CONFIG_FSDEV 1 00:09:26.432 #undef SPDK_CONFIG_FUSE 00:09:26.432 #undef SPDK_CONFIG_FUZZER 00:09:26.432 #define SPDK_CONFIG_FUZZER_LIB 00:09:26.432 #undef SPDK_CONFIG_GOLANG 00:09:26.432 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:26.432 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:26.432 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:26.432 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:26.432 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:26.432 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:26.432 #undef SPDK_CONFIG_HAVE_LZ4 00:09:26.432 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:26.432 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:26.432 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:26.432 #define SPDK_CONFIG_IDXD 1 00:09:26.432 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:26.432 #undef SPDK_CONFIG_IPSEC_MB 00:09:26.432 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:26.432 #define SPDK_CONFIG_ISAL 1 00:09:26.432 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:26.432 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:26.432 #define SPDK_CONFIG_LIBDIR 00:09:26.432 #undef SPDK_CONFIG_LTO 00:09:26.432 #define SPDK_CONFIG_MAX_LCORES 128 00:09:26.432 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:26.432 #define SPDK_CONFIG_NVME_CUSE 1 00:09:26.432 #undef SPDK_CONFIG_OCF 00:09:26.432 #define SPDK_CONFIG_OCF_PATH 00:09:26.432 #define SPDK_CONFIG_OPENSSL_PATH 00:09:26.432 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:26.432 #define SPDK_CONFIG_PGO_DIR 00:09:26.432 #undef SPDK_CONFIG_PGO_USE 00:09:26.432 #define SPDK_CONFIG_PREFIX /usr/local 00:09:26.432 #undef SPDK_CONFIG_RAID5F 00:09:26.432 #undef SPDK_CONFIG_RBD 00:09:26.432 #define SPDK_CONFIG_RDMA 1 00:09:26.432 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:26.432 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:26.432 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:26.432 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:26.432 #define SPDK_CONFIG_SHARED 1 00:09:26.432 #undef SPDK_CONFIG_SMA 00:09:26.432 #define SPDK_CONFIG_TESTS 1 00:09:26.432 #undef SPDK_CONFIG_TSAN 
00:09:26.432 #define SPDK_CONFIG_UBLK 1 00:09:26.432 #define SPDK_CONFIG_UBSAN 1 00:09:26.432 #undef SPDK_CONFIG_UNIT_TESTS 00:09:26.432 #undef SPDK_CONFIG_URING 00:09:26.432 #define SPDK_CONFIG_URING_PATH 00:09:26.433 #undef SPDK_CONFIG_URING_ZNS 00:09:26.433 #undef SPDK_CONFIG_USDT 00:09:26.433 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:26.433 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:26.433 #define SPDK_CONFIG_VFIO_USER 1 00:09:26.433 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:26.433 #define SPDK_CONFIG_VHOST 1 00:09:26.433 #define SPDK_CONFIG_VIRTIO 1 00:09:26.433 #undef SPDK_CONFIG_VTUNE 00:09:26.433 #define SPDK_CONFIG_VTUNE_DIR 00:09:26.433 #define SPDK_CONFIG_WERROR 1 00:09:26.433 #define SPDK_CONFIG_WPDK_DIR 00:09:26.433 #undef SPDK_CONFIG_XNVME 00:09:26.433 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:26.433 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:26.694 11:28:06 
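The long config.h dump a little further up serves one purpose in applications.sh: determining whether this SPDK build was configured with debug support (SPDK_CONFIG_DEBUG), which the SPDK_AUTOTEST_DEBUG_APPS check immediately after relies on. A hedged, equivalent standalone check (the script itself pattern-matches the file contents rather than calling grep):

  config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
  if [[ -e $config_h ]] && grep -q '^#define SPDK_CONFIG_DEBUG ' "$config_h"; then
    echo "debug build detected"                    # true for this run, per the dump above
  fi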
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:26.694 11:28:06 
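The pairs of ': <value>' and 'export NAME' lines above are consistent with the usual shell idiom for defaulting optional test switches: assign a default only if the variable is unset, then export it. A hedged reconstruction of that idiom (not the verbatim autotest_common.sh; the defaults shown are hypothetical), using two of the flags visible in this run:

  : "${SPDK_TEST_NVMF:=0}"                  # hypothetical default; this job supplies 1, hence ': 1' in the trace
  export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"      # hypothetical default; this job supplies tcp, hence ': tcp' in the trace
  export SPDK_TEST_NVMF_TRANSPORT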
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:26.694 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
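The very long LD_LIBRARY_PATH and PYTHONPATH values above are not corruption: the common test helpers prepend the SPDK, DPDK and libvfio-user library directories (and the python/rpc_plugins directories) unconditionally, and every nested test script that sources them adds another copy of the same segments, which is why each triple repeats several times in the exported value. The numbered SPDK_TEST_* lines earlier in the trace come from a companion "apply a default, then export" idiom. A minimal sketch of both patterns, assuming the SPDK_LIB_DIR/DPDK_LIB_DIR/VFIO_LIB_DIR values exported just above (SPDK_TEST_FOO is a placeholder flag name, and the exact default syntax is inferred from the trace, not copied from autotest_common.sh):

    # "-- # : 0" followed by "-- # export SPDK_TEST_..." is a default-then-export idiom of this shape:
    : "${SPDK_TEST_FOO:=0}"      # keep any value already provided by autorun-spdk.conf
    export SPDK_TEST_FOO

    # Unconditional prepend, re-evaluated each time the helpers are sourced,
    # which produces the repeated path segments seen in the trace:
    export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:$LD_LIBRARY_PATH
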
00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:26.695 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
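The sanitizer configuration in this stretch of the trace is plain environment wiring: ASAN and UBSAN are told to abort on the first error (UBSAN exiting with code 134), and LSAN is pointed at a freshly rewritten suppression file so the known libfuse3 leak does not fail the run; the default RPC socket and the SPDK/QEMU binary locations are exported alongside. A condensed sketch of the suppression-file step, using the same paths and option strings that appear above (any file contents beyond the libfuse3 entry are not visible in the trace, so this is a sketch rather than the verbatim script):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                     # start from a clean file
    echo leak:libfuse3.so >> "$asan_suppression_file"   # ignore the known libfuse3 leak
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
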
00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2863200 ]] 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2863200 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
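set_test_storage 2147483648 reserves scratch space for the test: the 2 GiB request is padded to the requested_size of 2214592512 bytes (a 64 MiB margin) seen in the df walk that follows, and the function then tries the test directory, a per-run fallback under /tmp/spdk.XXXXXX, and the fallback itself until it finds a mount with enough free space, rejecting a candidate only if current usage plus the request would push the filesystem past 95% full. In the trace below the root overlay has about 50 GB available, so new_size = 11133083648 + 2214592512 = 13347676160 stays well under the 95% cutoff on the 61988519936-byte filesystem and the test directory itself is kept as SPDK_TEST_STORAGE. A rough sketch of that selection loop (the mounts/sizes/avails/uses arrays are filled from 'df -T' exactly as the read loop in the trace shows; this is an illustration, not the verbatim autotest_common.sh function):

    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512, as in the trace

    for target_dir in "${storage_candidates[@]}"; do
        # Which mount backs this directory? (same awk filter the trace uses)
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')

        target_space=${avails[$mount]}
        (( target_space == 0 || target_space < requested_size )) && continue

        # Keep the filesystem under ~95% full after reserving the space.
        new_size=$(( ${uses[$mount]} + requested_size ))
        (( new_size * 100 / ${sizes[$mount]} > 95 )) && continue

        export SPDK_TEST_STORAGE=$target_dir
        break
    done
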
00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Pe9bwW 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Pe9bwW/tests/target /tmp/spdk.Pe9bwW 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:26.696 11:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50855436288 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11133083648 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22441984 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919846400 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074413568 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:26.696 11:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:26.696 * Looking for test storage... 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:26.696 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50855436288 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13347676160 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:26.697 11:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.697 11:28:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.697 --rc genhtml_branch_coverage=1 00:09:26.697 --rc genhtml_function_coverage=1 00:09:26.697 --rc genhtml_legend=1 00:09:26.697 --rc geninfo_all_blocks=1 00:09:26.697 --rc geninfo_unexecuted_blocks=1 00:09:26.697 00:09:26.697 ' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.697 --rc genhtml_branch_coverage=1 00:09:26.697 --rc genhtml_function_coverage=1 00:09:26.697 --rc genhtml_legend=1 00:09:26.697 --rc geninfo_all_blocks=1 00:09:26.697 --rc geninfo_unexecuted_blocks=1 00:09:26.697 00:09:26.697 ' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.697 --rc genhtml_branch_coverage=1 00:09:26.697 --rc genhtml_function_coverage=1 00:09:26.697 --rc genhtml_legend=1 00:09:26.697 --rc geninfo_all_blocks=1 00:09:26.697 --rc geninfo_unexecuted_blocks=1 00:09:26.697 00:09:26.697 ' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.697 --rc genhtml_branch_coverage=1 00:09:26.697 --rc genhtml_function_coverage=1 00:09:26.697 --rc genhtml_legend=1 00:09:26.697 --rc geninfo_all_blocks=1 00:09:26.697 --rc geninfo_unexecuted_blocks=1 00:09:26.697 00:09:26.697 ' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.697 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.698 11:28:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.230 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:29.231 
11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:29.231 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:29.231 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:29.231 Found net devices under 0000:09:00.0: cvl_0_0 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:29.231 Found net devices under 
0000:09:00.1: cvl_0_1 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:09:29.231 00:09:29.231 --- 10.0.0.2 ping statistics --- 00:09:29.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.231 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:09:29.231 00:09:29.231 --- 10.0.0.1 ping statistics --- 00:09:29.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.231 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.231 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.232 ************************************ 00:09:29.232 START TEST nvmf_filesystem_no_in_capsule 00:09:29.232 ************************************ 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
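The interface bring-up traced above (nvmf_tcp_init from nvmf/common.sh) amounts to the short sequence below. The cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are the values reported in this run; this is a condensed sketch of what the traced commands do, not the script itself, and it needs root plus the detected E810 ports.

  # One port of the pair is moved into a private namespace so the target and the
  # initiator can reach each other over TCP on the same machine.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator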
00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2864849 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2864849 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2864849 ']' 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.232 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.232 [2024-11-15 11:28:09.519807] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:09:29.232 [2024-11-15 11:28:09.519885] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.232 [2024-11-15 11:28:09.592733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.232 [2024-11-15 11:28:09.654654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.232 [2024-11-15 11:28:09.654720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.232 [2024-11-15 11:28:09.654735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.232 [2024-11-15 11:28:09.654748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.232 [2024-11-15 11:28:09.654759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:29.490 [2024-11-15 11:28:09.656509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.490 [2024-11-15 11:28:09.656572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.490 [2024-11-15 11:28:09.656635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.490 [2024-11-15 11:28:09.656638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.490 [2024-11-15 11:28:09.812716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.490 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.748 Malloc1 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.748 11:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.748 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.749 [2024-11-15 11:28:09.986129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.749 11:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:29.749 { 00:09:29.749 "name": "Malloc1", 00:09:29.749 "aliases": [ 00:09:29.749 "6f7da989-1a69-4a01-a38c-8b9e8d92d42c" 00:09:29.749 ], 00:09:29.749 "product_name": "Malloc disk", 00:09:29.749 "block_size": 512, 00:09:29.749 "num_blocks": 1048576, 00:09:29.749 "uuid": "6f7da989-1a69-4a01-a38c-8b9e8d92d42c", 00:09:29.749 "assigned_rate_limits": { 00:09:29.749 "rw_ios_per_sec": 0, 00:09:29.749 "rw_mbytes_per_sec": 0, 00:09:29.749 "r_mbytes_per_sec": 0, 00:09:29.749 "w_mbytes_per_sec": 0 00:09:29.749 }, 00:09:29.749 "claimed": true, 00:09:29.749 "claim_type": "exclusive_write", 00:09:29.749 "zoned": false, 00:09:29.749 "supported_io_types": { 00:09:29.749 "read": 
true, 00:09:29.749 "write": true, 00:09:29.749 "unmap": true, 00:09:29.749 "flush": true, 00:09:29.749 "reset": true, 00:09:29.749 "nvme_admin": false, 00:09:29.749 "nvme_io": false, 00:09:29.749 "nvme_io_md": false, 00:09:29.749 "write_zeroes": true, 00:09:29.749 "zcopy": true, 00:09:29.749 "get_zone_info": false, 00:09:29.749 "zone_management": false, 00:09:29.749 "zone_append": false, 00:09:29.749 "compare": false, 00:09:29.749 "compare_and_write": false, 00:09:29.749 "abort": true, 00:09:29.749 "seek_hole": false, 00:09:29.749 "seek_data": false, 00:09:29.749 "copy": true, 00:09:29.749 "nvme_iov_md": false 00:09:29.749 }, 00:09:29.749 "memory_domains": [ 00:09:29.749 { 00:09:29.749 "dma_device_id": "system", 00:09:29.749 "dma_device_type": 1 00:09:29.749 }, 00:09:29.749 { 00:09:29.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.749 "dma_device_type": 2 00:09:29.749 } 00:09:29.749 ], 00:09:29.749 "driver_specific": {} 00:09:29.749 } 00:09:29.749 ]' 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:29.749 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.315 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:30.315 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:30.315 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.315 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:30.315 11:28:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:32.849 11:28:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:32.849 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:33.414 11:28:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.346 ************************************ 00:09:34.346 START TEST filesystem_ext4 00:09:34.346 ************************************ 00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
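The target provisioning and host attach traced above reduce to the steps below. rpc_cmd in these scripts wraps scripts/rpc.py against the nvmf_tgt started inside the cvl_0_0_ns_spdk namespace (-i 0 -e 0xFFFF -m 0xF), so this is an approximation of the same calls issued by hand; the --hostnqn/--hostid values passed to nvme connect in the trace are this host's UUID and are left out here.

  # TCP transport (-c sets the in-capsule data size: 0 for this pass, 4096 in the
  # later in-capsule pass), a 512 MiB malloc bdev with 512-byte blocks, and a
  # subsystem exposing it as a namespace on 10.0.0.2:4420.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Attach from the initiator side and wait for the block device to show up.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME                  # resolves to nvme0n1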
00:09:34.346 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:34.347 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:34.347 mke2fs 1.47.0 (5-Feb-2023) 00:09:34.604 Discarding device blocks: 0/522240 done 00:09:34.604 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:34.604 Filesystem UUID: c50c0843-faa5-4b16-a82a-de3229d38ce5 00:09:34.604 Superblock backups stored on blocks: 00:09:34.604 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:34.604 00:09:34.604 Allocating group tables: 0/64 done 00:09:34.604 Writing inode tables: 0/64 done 00:09:34.604 Creating journal (8192 blocks): done 00:09:34.604 Writing superblocks and filesystem accounting information: 0/64 done 00:09:34.604 00:09:34.604 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:34.604 11:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:41.158 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:41.158 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:41.158 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:41.158 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:41.158 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:41.158 11:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:41.158 
11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2864849 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:41.158 00:09:41.158 real 0m6.311s 00:09:41.158 user 0m0.024s 00:09:41.158 sys 0m0.062s 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:41.158 ************************************ 00:09:41.158 END TEST filesystem_ext4 00:09:41.158 ************************************ 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.158 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.158 ************************************ 00:09:41.158 START TEST filesystem_btrfs 00:09:41.159 ************************************ 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:41.159 11:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:41.159 btrfs-progs v6.8.1 00:09:41.159 See https://btrfs.readthedocs.io for more information. 00:09:41.159 00:09:41.159 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:41.159 NOTE: several default settings have changed in version 5.15, please make sure 00:09:41.159 this does not affect your deployments: 00:09:41.159 - DUP for metadata (-m dup) 00:09:41.159 - enabled no-holes (-O no-holes) 00:09:41.159 - enabled free-space-tree (-R free-space-tree) 00:09:41.159 00:09:41.159 Label: (null) 00:09:41.159 UUID: d95b96a3-c5f4-4823-b84d-bee038215c12 00:09:41.159 Node size: 16384 00:09:41.159 Sector size: 4096 (CPU page size: 4096) 00:09:41.159 Filesystem size: 510.00MiB 00:09:41.159 Block group profiles: 00:09:41.159 Data: single 8.00MiB 00:09:41.159 Metadata: DUP 32.00MiB 00:09:41.159 System: DUP 8.00MiB 00:09:41.159 SSD detected: yes 00:09:41.159 Zoned device: no 00:09:41.159 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:41.159 Checksum: crc32c 00:09:41.159 Number of devices: 1 00:09:41.159 Devices: 00:09:41.159 ID SIZE PATH 00:09:41.159 1 510.00MiB /dev/nvme0n1p1 00:09:41.159 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:41.159 11:28:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2864849 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:42.090 
11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:42.090 00:09:42.090 real 0m1.197s 00:09:42.090 user 0m0.023s 00:09:42.090 sys 0m0.108s 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:42.090 ************************************ 00:09:42.090 END TEST filesystem_btrfs 00:09:42.090 ************************************ 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.090 ************************************ 00:09:42.090 START TEST filesystem_xfs 00:09:42.090 ************************************ 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:42.090 11:28:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:42.090 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:42.090 = sectsz=512 attr=2, projid32bit=1 00:09:42.090 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:42.090 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:42.090 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:42.090 = sunit=0 swidth=0 blks 00:09:42.090 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:42.091 log =internal log bsize=4096 blocks=16384, version=2 00:09:42.091 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:42.091 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:43.463 Discarding blocks...Done. 00:09:43.463 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:43.463 11:28:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2864849 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:45.985 00:09:45.985 real 0m3.567s 00:09:45.985 user 0m0.014s 00:09:45.985 sys 0m0.061s 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:45.985 ************************************ 00:09:45.985 END TEST filesystem_xfs 00:09:45.985 ************************************ 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:45.985 11:28:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.985 11:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.985 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:45.985 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:45.985 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.985 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2864849 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2864849 ']' 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2864849 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864849 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864849' 00:09:45.986 killing process with pid 2864849 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2864849 00:09:45.986 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2864849 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:46.243 00:09:46.243 real 0m17.148s 00:09:46.243 user 1m6.491s 00:09:46.243 sys 0m2.084s 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.243 ************************************ 00:09:46.243 END TEST nvmf_filesystem_no_in_capsule 00:09:46.243 ************************************ 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.243 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:46.500 ************************************ 00:09:46.500 START TEST nvmf_filesystem_in_capsule 00:09:46.500 ************************************ 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2867078 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2867078 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2867078 ']' 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.500 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.500 [2024-11-15 11:28:26.730029] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:09:46.500 [2024-11-15 11:28:26.730136] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.500 [2024-11-15 11:28:26.803196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.500 [2024-11-15 11:28:26.859551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.500 [2024-11-15 11:28:26.859606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.500 [2024-11-15 11:28:26.859628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.500 [2024-11-15 11:28:26.859639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.500 [2024-11-15 11:28:26.859649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.500 [2024-11-15 11:28:26.861133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.500 [2024-11-15 11:28:26.861241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.500 [2024-11-15 11:28:26.861331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.500 [2024-11-15 11:28:26.861336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.757 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.757 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:46.757 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.757 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.757 11:28:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.757 [2024-11-15 11:28:27.008364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.757 11:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.757 Malloc1 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.757 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.014 [2024-11-15 11:28:27.188803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:47.014 11:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:47.014 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:47.015 { 00:09:47.015 "name": "Malloc1", 00:09:47.015 "aliases": [ 00:09:47.015 "c0cbe8e4-5b55-429b-82ab-dd5a573f9e80" 00:09:47.015 ], 00:09:47.015 "product_name": "Malloc disk", 00:09:47.015 "block_size": 512, 00:09:47.015 "num_blocks": 1048576, 00:09:47.015 "uuid": "c0cbe8e4-5b55-429b-82ab-dd5a573f9e80", 00:09:47.015 "assigned_rate_limits": { 00:09:47.015 "rw_ios_per_sec": 0, 00:09:47.015 "rw_mbytes_per_sec": 0, 00:09:47.015 "r_mbytes_per_sec": 0, 00:09:47.015 "w_mbytes_per_sec": 0 00:09:47.015 }, 00:09:47.015 "claimed": true, 00:09:47.015 "claim_type": "exclusive_write", 00:09:47.015 "zoned": false, 00:09:47.015 "supported_io_types": { 00:09:47.015 "read": true, 00:09:47.015 "write": true, 00:09:47.015 "unmap": true, 00:09:47.015 "flush": true, 00:09:47.015 "reset": true, 00:09:47.015 "nvme_admin": false, 00:09:47.015 "nvme_io": false, 00:09:47.015 "nvme_io_md": false, 00:09:47.015 "write_zeroes": true, 00:09:47.015 "zcopy": true, 00:09:47.015 "get_zone_info": false, 00:09:47.015 "zone_management": false, 00:09:47.015 "zone_append": false, 00:09:47.015 "compare": false, 00:09:47.015 "compare_and_write": false, 00:09:47.015 "abort": true, 00:09:47.015 "seek_hole": false, 00:09:47.015 "seek_data": false, 00:09:47.015 "copy": true, 00:09:47.015 "nvme_iov_md": false 00:09:47.015 }, 00:09:47.015 "memory_domains": [ 00:09:47.015 { 00:09:47.015 "dma_device_id": "system", 00:09:47.015 "dma_device_type": 1 00:09:47.015 }, 00:09:47.015 { 00:09:47.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.015 "dma_device_type": 2 00:09:47.015 } 00:09:47.015 ], 00:09:47.015 "driver_specific": {} 00:09:47.015 } 00:09:47.015 ]' 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:47.015 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.628 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:47.628 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:47.628 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.628 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:47.628 11:28:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:49.526 11:28:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:49.785 11:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:50.716 11:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.647 ************************************ 00:09:51.647 START TEST filesystem_in_capsule_ext4 00:09:51.647 ************************************ 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:51.647 11:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:51.647 mke2fs 1.47.0 (5-Feb-2023) 00:09:51.647 Discarding device blocks: 0/522240 done 00:09:51.647 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:51.647 Filesystem UUID: 47077c48-74eb-4e2b-b58c-91460ed0b470 00:09:51.647 Superblock backups stored on blocks: 00:09:51.647 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:51.647 00:09:51.647 Allocating group tables: 0/64 done 00:09:51.647 Writing inode tables: 
0/64 done 00:09:52.706 Creating journal (8192 blocks): done 00:09:52.963 Writing superblocks and filesystem accounting information: 0/64 done 00:09:52.963 00:09:52.963 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:52.963 11:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2867078 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:59.516 00:09:59.516 real 0m7.532s 00:09:59.516 user 0m0.015s 00:09:59.516 sys 0m0.070s 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:59.516 ************************************ 00:09:59.516 END TEST filesystem_in_capsule_ext4 00:09:59.516 ************************************ 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.516 
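The trace above reduces to a short, repeatable sequence: query the Malloc1 bdev over RPC, derive the expected device size (512-byte blocks * 1048576 blocks = 536870912 bytes, i.e. 512 MiB), connect the kernel initiator to nqn.2016-06.io.spdk:cnode1 over TCP, partition the resulting nvme0n1, and run the ext4 create/mount/write/unmount cycle. The stand-alone sketch below condenses exactly those steps; the ./scripts/rpc.py path is an assumption (the harness uses its rpc_cmd wrapper), while the addresses, NQNs, serial and mount point are taken from this log.

#!/usr/bin/env bash
# Condensed sketch of the filesystem_in_capsule flow traced above.
# Assumption: ./scripts/rpc.py stands in for the harness's rpc_cmd wrapper.

rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# 1. Read the Malloc1 geometry and derive the expected size in bytes.
bs=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
nb=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
malloc_size=$((bs * nb))   # 536870912; the harness later compares this with the nvme device size

# 2. Connect the kernel initiator over TCP and wait for the namespace to show up.
nvme connect --hostnqn="$hostnqn" --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
    -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
for i in $(seq 1 15); do
    lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
    sleep 2
done
dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# 3. Partition, format (ext4 here) and exercise the filesystem.
parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1
mkfs.ext4 -F "/dev/${dev}p1"
mkdir -p /mnt/device
mount "/dev/${dev}p1" /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device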
************************************ 00:09:59.516 START TEST filesystem_in_capsule_btrfs 00:09:59.516 ************************************ 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:59.516 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:59.517 btrfs-progs v6.8.1 00:09:59.517 See https://btrfs.readthedocs.io for more information. 00:09:59.517 00:09:59.517 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:59.517 NOTE: several default settings have changed in version 5.15, please make sure 00:09:59.517 this does not affect your deployments: 00:09:59.517 - DUP for metadata (-m dup) 00:09:59.517 - enabled no-holes (-O no-holes) 00:09:59.517 - enabled free-space-tree (-R free-space-tree) 00:09:59.517 00:09:59.517 Label: (null) 00:09:59.517 UUID: a288b874-b480-49af-aa34-8040ac76bf52 00:09:59.517 Node size: 16384 00:09:59.517 Sector size: 4096 (CPU page size: 4096) 00:09:59.517 Filesystem size: 510.00MiB 00:09:59.517 Block group profiles: 00:09:59.517 Data: single 8.00MiB 00:09:59.517 Metadata: DUP 32.00MiB 00:09:59.517 System: DUP 8.00MiB 00:09:59.517 SSD detected: yes 00:09:59.517 Zoned device: no 00:09:59.517 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:59.517 Checksum: crc32c 00:09:59.517 Number of devices: 1 00:09:59.517 Devices: 00:09:59.517 ID SIZE PATH 00:09:59.517 1 510.00MiB /dev/nvme0n1p1 00:09:59.517 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2867078 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:59.517 00:09:59.517 real 0m0.500s 00:09:59.517 user 0m0.015s 00:09:59.517 sys 0m0.099s 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:09:59.517 ************************************ 00:09:59.517 END TEST filesystem_in_capsule_btrfs 00:09:59.517 ************************************ 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.517 ************************************ 00:09:59.517 START TEST filesystem_in_capsule_xfs 00:09:59.517 ************************************ 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:59.517 11:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:59.774 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:59.775 = sectsz=512 attr=2, projid32bit=1 00:09:59.775 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:59.775 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:59.775 data = bsize=4096 blocks=130560, imaxpct=25 00:09:59.775 = sunit=0 swidth=0 blks 00:09:59.775 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:59.775 log =internal log bsize=4096 blocks=16384, version=2 00:09:59.775 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:59.775 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:00.707 Discarding blocks...Done. 
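The btrfs and xfs passes above reuse the same make_filesystem helper from autotest_common.sh; the only behaviour visible in the trace is the force-flag selection (-F for ext4, -f for everything else) and the mkfs invocation itself. A hedged reconstruction of just that part, based only on the lines traced here (the helper also declares a retry counter i=0 whose loop body does not appear in this excerpt):

# Hedged reconstruction of make_filesystem() as traced above; only the
# force-flag choice and the mkfs call are visible in this excerpt.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0            # retry counter declared in the trace; its loop is not shown here
    local force

    if [ "$fstype" = ext4 ]; then
        force=-F         # mkfs.ext4 forces with -F
    else
        force=-f         # mkfs.btrfs and mkfs.xfs force with -f
    fi

    mkfs."$fstype" $force "$dev_name"
}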
00:10:00.707 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:00.707 11:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2867078 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:03.232 00:10:03.232 real 0m3.547s 00:10:03.232 user 0m0.018s 00:10:03.232 sys 0m0.060s 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:03.232 ************************************ 00:10:03.232 END TEST filesystem_in_capsule_xfs 00:10:03.232 ************************************ 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.232 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2867078 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2867078 ']' 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2867078 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.233 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2867078 00:10:03.490 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.490 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.491 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2867078' 00:10:03.491 killing process with pid 2867078 00:10:03.491 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2867078 00:10:03.491 11:28:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2867078 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:03.749 00:10:03.749 real 0m17.419s 00:10:03.749 user 1m7.414s 00:10:03.749 sys 0m2.187s 00:10:03.749 11:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:03.749 ************************************ 00:10:03.749 END TEST nvmf_filesystem_in_capsule 00:10:03.749 ************************************ 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.749 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.749 rmmod nvme_tcp 00:10:03.749 rmmod nvme_fabrics 00:10:03.749 rmmod nvme_keyring 00:10:04.007 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.007 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:04.007 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:04.007 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:04.007 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.007 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.008 11:28:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.911 00:10:05.911 real 0m39.578s 00:10:05.911 user 2m15.061s 00:10:05.911 sys 0m6.135s 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:05.911 
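Teardown is symmetric with setup: the test partition is removed under flock, the initiator disconnects and waits for the serial to disappear from lsblk, the subsystem is deleted over RPC, the target process (PID 2867078 in this run) is killed, and nvmftestfini unloads nvme-tcp and nvme-fabrics, restores iptables and flushes the test interfaces. A condensed sketch, assuming the same rpc.py helper as above and that it runs in the shell that originally launched the target:

# Teardown sketch mirroring the trace above; PID and interface names are the
# values from this run, and $rpc is the assumed RPC helper path.
rpc=./scripts/rpc.py
nvmfpid=2867078                                       # captured when nvmf_tgt was started

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
until ! lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the SPDK_NVMF ACCEPT rule
ip -4 addr flush cvl_0_1                              # _remove_spdk_ns then tears down the test namespace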
************************************ 00:10:05.911 END TEST nvmf_filesystem 00:10:05.911 ************************************ 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:05.911 ************************************ 00:10:05.911 START TEST nvmf_target_discovery 00:10:05.911 ************************************ 00:10:05.911 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:05.911 * Looking for test storage... 00:10:06.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:06.170 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.171 --rc genhtml_branch_coverage=1 00:10:06.171 --rc genhtml_function_coverage=1 00:10:06.171 --rc genhtml_legend=1 00:10:06.171 --rc geninfo_all_blocks=1 00:10:06.171 --rc geninfo_unexecuted_blocks=1 00:10:06.171 00:10:06.171 ' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.171 --rc genhtml_branch_coverage=1 00:10:06.171 --rc genhtml_function_coverage=1 00:10:06.171 --rc genhtml_legend=1 00:10:06.171 --rc geninfo_all_blocks=1 00:10:06.171 --rc geninfo_unexecuted_blocks=1 00:10:06.171 00:10:06.171 ' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:06.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.171 --rc genhtml_branch_coverage=1 00:10:06.171 --rc genhtml_function_coverage=1 00:10:06.171 --rc genhtml_legend=1 00:10:06.171 --rc geninfo_all_blocks=1 00:10:06.171 --rc geninfo_unexecuted_blocks=1 00:10:06.171 00:10:06.171 ' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.171 --rc genhtml_branch_coverage=1 00:10:06.171 --rc genhtml_function_coverage=1 00:10:06.171 --rc genhtml_legend=1 00:10:06.171 --rc geninfo_all_blocks=1 00:10:06.171 --rc geninfo_unexecuted_blocks=1 00:10:06.171 00:10:06.171 ' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.171 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.172 11:28:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.702 11:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:08.702 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:08.702 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:08.702 Found net devices under 0000:09:00.0: cvl_0_0 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:08.702 Found net devices under 0000:09:00.1: cvl_0_1 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.702 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.703 11:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:10:08.703 00:10:08.703 --- 10.0.0.2 ping statistics --- 00:10:08.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.703 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:10:08.703 00:10:08.703 --- 10.0.0.1 ping statistics --- 00:10:08.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.703 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2871363 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2871363 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2871363 ']' 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.703 11:28:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.703 [2024-11-15 11:28:48.875565] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:10:08.703 [2024-11-15 11:28:48.875675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.703 [2024-11-15 11:28:48.948751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.703 [2024-11-15 11:28:49.005324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.703 [2024-11-15 11:28:49.005381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.703 [2024-11-15 11:28:49.005406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.703 [2024-11-15 11:28:49.005417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.703 [2024-11-15 11:28:49.005426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
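At this point the TCP test bed is fully plumbed: nvmf_tcp_init has split the two E810 ports between the host and a dedicated target network namespace, opened port 4420 in the firewall, and verified reachability in both directions, and nvmfappstart has launched nvmf_tgt inside that namespace. A condensed sketch of the commands the trace above executed (namespace name, interface names, addresses and the nvmf_tgt path are the ones used by this particular run):

  # target-side namespace and interface
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # the initiator keeps cvl_0_1 on the host side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, tagged so cleanup can strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
  # the target application itself runs inside the namespace
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &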
00:10:08.703 [2024-11-15 11:28:49.007005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.703 [2024-11-15 11:28:49.007113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.703 [2024-11-15 11:28:49.007206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.703 [2024-11-15 11:28:49.007209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.703 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.703 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:08.703 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.703 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.703 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.960 [2024-11-15 11:28:49.145718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:08.960 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 Null1 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 [2024-11-15 11:28:49.186004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 Null2 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:08.961 Null3 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 Null4 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.961 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:10:09.219 00:10:09.219 Discovery Log Number of Records 6, Generation counter 6 00:10:09.219 =====Discovery Log Entry 0====== 00:10:09.219 trtype: tcp 00:10:09.219 adrfam: ipv4 00:10:09.219 subtype: current discovery subsystem 00:10:09.219 treq: not required 00:10:09.219 portid: 0 00:10:09.219 trsvcid: 4420 00:10:09.219 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:09.219 traddr: 10.0.0.2 00:10:09.219 eflags: explicit discovery connections, duplicate discovery information 00:10:09.219 sectype: none 00:10:09.219 =====Discovery Log Entry 1====== 00:10:09.219 trtype: tcp 00:10:09.219 adrfam: ipv4 00:10:09.219 subtype: nvme subsystem 00:10:09.219 treq: not required 00:10:09.219 portid: 0 00:10:09.219 trsvcid: 4420 00:10:09.219 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:09.219 traddr: 10.0.0.2 00:10:09.219 eflags: none 00:10:09.219 sectype: none 00:10:09.219 =====Discovery Log Entry 2====== 00:10:09.219 trtype: tcp 00:10:09.219 adrfam: ipv4 00:10:09.219 subtype: nvme subsystem 00:10:09.219 treq: not required 00:10:09.219 portid: 0 00:10:09.219 trsvcid: 4420 00:10:09.219 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:09.219 traddr: 10.0.0.2 00:10:09.219 eflags: none 00:10:09.219 sectype: none 00:10:09.219 =====Discovery Log Entry 3====== 00:10:09.219 trtype: tcp 00:10:09.219 adrfam: ipv4 00:10:09.219 subtype: nvme subsystem 00:10:09.219 treq: not required 00:10:09.219 portid: 0 00:10:09.219 trsvcid: 4420 00:10:09.219 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:09.219 traddr: 10.0.0.2 00:10:09.219 eflags: none 00:10:09.219 sectype: none 00:10:09.220 =====Discovery Log Entry 4====== 00:10:09.220 trtype: tcp 00:10:09.220 adrfam: ipv4 00:10:09.220 subtype: nvme subsystem 
00:10:09.220 treq: not required 00:10:09.220 portid: 0 00:10:09.220 trsvcid: 4420 00:10:09.220 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:09.220 traddr: 10.0.0.2 00:10:09.220 eflags: none 00:10:09.220 sectype: none 00:10:09.220 =====Discovery Log Entry 5====== 00:10:09.220 trtype: tcp 00:10:09.220 adrfam: ipv4 00:10:09.220 subtype: discovery subsystem referral 00:10:09.220 treq: not required 00:10:09.220 portid: 0 00:10:09.220 trsvcid: 4430 00:10:09.220 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:09.220 traddr: 10.0.0.2 00:10:09.220 eflags: none 00:10:09.220 sectype: none 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:09.220 Perform nvmf subsystem discovery via RPC 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 [ 00:10:09.220 { 00:10:09.220 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:09.220 "subtype": "Discovery", 00:10:09.220 "listen_addresses": [ 00:10:09.220 { 00:10:09.220 "trtype": "TCP", 00:10:09.220 "adrfam": "IPv4", 00:10:09.220 "traddr": "10.0.0.2", 00:10:09.220 "trsvcid": "4420" 00:10:09.220 } 00:10:09.220 ], 00:10:09.220 "allow_any_host": true, 00:10:09.220 "hosts": [] 00:10:09.220 }, 00:10:09.220 { 00:10:09.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.220 "subtype": "NVMe", 00:10:09.220 "listen_addresses": [ 00:10:09.220 { 00:10:09.220 "trtype": "TCP", 00:10:09.220 "adrfam": "IPv4", 00:10:09.220 "traddr": "10.0.0.2", 00:10:09.220 "trsvcid": "4420" 00:10:09.220 } 00:10:09.220 ], 00:10:09.220 "allow_any_host": true, 00:10:09.220 "hosts": [], 00:10:09.220 "serial_number": "SPDK00000000000001", 00:10:09.220 "model_number": "SPDK bdev Controller", 00:10:09.220 "max_namespaces": 32, 00:10:09.220 "min_cntlid": 1, 00:10:09.220 "max_cntlid": 65519, 00:10:09.220 "namespaces": [ 00:10:09.220 { 00:10:09.220 "nsid": 1, 00:10:09.220 "bdev_name": "Null1", 00:10:09.220 "name": "Null1", 00:10:09.220 "nguid": "66EF9CCF56AB4138863F6AFD7BC5F06E", 00:10:09.220 "uuid": "66ef9ccf-56ab-4138-863f-6afd7bc5f06e" 00:10:09.220 } 00:10:09.220 ] 00:10:09.220 }, 00:10:09.220 { 00:10:09.220 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:09.220 "subtype": "NVMe", 00:10:09.220 "listen_addresses": [ 00:10:09.220 { 00:10:09.220 "trtype": "TCP", 00:10:09.220 "adrfam": "IPv4", 00:10:09.220 "traddr": "10.0.0.2", 00:10:09.220 "trsvcid": "4420" 00:10:09.220 } 00:10:09.220 ], 00:10:09.220 "allow_any_host": true, 00:10:09.220 "hosts": [], 00:10:09.220 "serial_number": "SPDK00000000000002", 00:10:09.220 "model_number": "SPDK bdev Controller", 00:10:09.220 "max_namespaces": 32, 00:10:09.220 "min_cntlid": 1, 00:10:09.220 "max_cntlid": 65519, 00:10:09.220 "namespaces": [ 00:10:09.220 { 00:10:09.220 "nsid": 1, 00:10:09.220 "bdev_name": "Null2", 00:10:09.220 "name": "Null2", 00:10:09.220 "nguid": "601BA9FE8996454BBAA6CA6F9239C2F9", 00:10:09.220 "uuid": "601ba9fe-8996-454b-baa6-ca6f9239c2f9" 00:10:09.220 } 00:10:09.220 ] 00:10:09.220 }, 00:10:09.220 { 00:10:09.220 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:09.220 "subtype": "NVMe", 00:10:09.220 "listen_addresses": [ 00:10:09.220 { 00:10:09.220 "trtype": "TCP", 00:10:09.220 "adrfam": "IPv4", 00:10:09.220 "traddr": "10.0.0.2", 
00:10:09.220 "trsvcid": "4420" 00:10:09.220 } 00:10:09.220 ], 00:10:09.220 "allow_any_host": true, 00:10:09.220 "hosts": [], 00:10:09.220 "serial_number": "SPDK00000000000003", 00:10:09.220 "model_number": "SPDK bdev Controller", 00:10:09.220 "max_namespaces": 32, 00:10:09.220 "min_cntlid": 1, 00:10:09.220 "max_cntlid": 65519, 00:10:09.220 "namespaces": [ 00:10:09.220 { 00:10:09.220 "nsid": 1, 00:10:09.220 "bdev_name": "Null3", 00:10:09.220 "name": "Null3", 00:10:09.220 "nguid": "3747707CD6DC4389BADFD2019478C471", 00:10:09.220 "uuid": "3747707c-d6dc-4389-badf-d2019478c471" 00:10:09.220 } 00:10:09.220 ] 00:10:09.220 }, 00:10:09.220 { 00:10:09.220 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:09.220 "subtype": "NVMe", 00:10:09.220 "listen_addresses": [ 00:10:09.220 { 00:10:09.220 "trtype": "TCP", 00:10:09.220 "adrfam": "IPv4", 00:10:09.220 "traddr": "10.0.0.2", 00:10:09.220 "trsvcid": "4420" 00:10:09.220 } 00:10:09.220 ], 00:10:09.220 "allow_any_host": true, 00:10:09.220 "hosts": [], 00:10:09.220 "serial_number": "SPDK00000000000004", 00:10:09.220 "model_number": "SPDK bdev Controller", 00:10:09.220 "max_namespaces": 32, 00:10:09.220 "min_cntlid": 1, 00:10:09.220 "max_cntlid": 65519, 00:10:09.220 "namespaces": [ 00:10:09.220 { 00:10:09.220 "nsid": 1, 00:10:09.220 "bdev_name": "Null4", 00:10:09.220 "name": "Null4", 00:10:09.220 "nguid": "554039E96DFF42E4B55C707F95B28598", 00:10:09.220 "uuid": "554039e9-6dff-42e4-b55c-707f95b28598" 00:10:09.220 } 00:10:09.220 ] 00:10:09.220 } 00:10:09.220 ] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:09.220 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:09.220 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.221 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.479 rmmod nvme_tcp 00:10:09.479 rmmod nvme_fabrics 00:10:09.479 rmmod nvme_keyring 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2871363 ']' 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2871363 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2871363 ']' 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2871363 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2871363 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2871363' 00:10:09.479 killing process with pid 2871363 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2871363 00:10:09.479 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2871363 00:10:09.739 11:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.739 11:28:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.646 00:10:11.646 real 0m5.728s 00:10:11.646 user 0m4.817s 00:10:11.646 sys 0m2.000s 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:11.646 ************************************ 00:10:11.646 END TEST nvmf_target_discovery 00:10:11.646 ************************************ 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.646 11:28:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.906 ************************************ 00:10:11.906 START TEST nvmf_referrals 00:10:11.906 ************************************ 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:11.906 * Looking for test storage... 
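For reference, the nvmf_target_discovery test that just finished (END TEST banner above) reduces to the RPC sequence below. This is a condensed reconstruction from the trace, not a verbatim copy of target/discovery.sh; rpc_cmd is the harness wrapper around SPDK's JSON-RPC client talking to /var/tmp/spdk.sock.

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 4); do
    rpc_cmd bdev_null_create Null$i 102400 512          # null bdev backing, size/block args as traced
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  # initiator view: 4 NVMe subsystems + 1 referral + the discovery subsystem itself = 6 records
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_get_subsystems                           # target-side view, the JSON dump above
  for i in $(seq 1 4); do                               # teardown mirrors the setup
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    rpc_cmd bdev_null_delete Null$i
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430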
00:10:11.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.906 --rc genhtml_branch_coverage=1 00:10:11.906 --rc genhtml_function_coverage=1 00:10:11.906 --rc genhtml_legend=1 00:10:11.906 --rc geninfo_all_blocks=1 00:10:11.906 --rc geninfo_unexecuted_blocks=1 00:10:11.906 00:10:11.906 ' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.906 --rc genhtml_branch_coverage=1 00:10:11.906 --rc genhtml_function_coverage=1 00:10:11.906 --rc genhtml_legend=1 00:10:11.906 --rc geninfo_all_blocks=1 00:10:11.906 --rc geninfo_unexecuted_blocks=1 00:10:11.906 00:10:11.906 ' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.906 --rc genhtml_branch_coverage=1 00:10:11.906 --rc genhtml_function_coverage=1 00:10:11.906 --rc genhtml_legend=1 00:10:11.906 --rc geninfo_all_blocks=1 00:10:11.906 --rc geninfo_unexecuted_blocks=1 00:10:11.906 00:10:11.906 ' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.906 --rc genhtml_branch_coverage=1 00:10:11.906 --rc genhtml_function_coverage=1 00:10:11.906 --rc genhtml_legend=1 00:10:11.906 --rc geninfo_all_blocks=1 00:10:11.906 --rc geninfo_unexecuted_blocks=1 00:10:11.906 00:10:11.906 ' 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.906 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.907 11:28:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:14.441 11:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:14.441 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:14.441 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:14.441 
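The device scan above matched both ports of the NIC at 0000:09:00.x against the E810 PCI ID 0x8086:0x159b; each PCI function is then resolved to its kernel net device through sysfs, as the "Found net devices under ..." lines just below show. A rough equivalent of that resolution step (the operstate check here stands in for the trace's "[[ up == up ]]" test and is an assumption, not a copy of nvmf/common.sh):

  for pci in 0000:09:00.0 0000:09:00.1; do
    for net in /sys/bus/pci/devices/$pci/net/*; do
      dev=${net##*/}                                    # cvl_0_0 / cvl_0_1 on this machine
      [[ $(cat "$net/operstate") == up ]] && echo "Found net devices under $pci: $dev"
    done
  done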
11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:14.441 Found net devices under 0000:09:00.0: cvl_0_0 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:14.441 Found net devices under 0000:09:00.1: cvl_0_1 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.441 11:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.441 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:10:14.442 00:10:14.442 --- 10.0.0.2 ping statistics --- 00:10:14.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.442 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:10:14.442 00:10:14.442 --- 10.0.0.1 ping statistics --- 00:10:14.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.442 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2873468 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2873468 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2873468 ']' 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
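Editor's note: condensed from the nvmf_tcp_init commands above — one port of the looped-back pair (cvl_0_0) is moved into a private namespace and addressed as the target (10.0.0.2), the port left in the root namespace (cvl_0_1) acts as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms the path before the target application starts. A hedged re-statement of that sequence, only meaningful on a host wired the same way; interface names, addresses and the iptables comment tag are this run's values:

  TGT_IF=cvl_0_0  INI_IF=cvl_0_1  NS=cvl_0_0_ns_spdk    # names used in this run

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"                            # start from clean interfaces
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                     # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator address (root namespace)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address (in namespace)
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # allow NVMe/TCP (port 4420) in on the initiator-facing interface, tagged so cleanup can find it
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                    # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                # namespace -> root namespace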
00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.442 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.442 [2024-11-15 11:28:54.705945] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:10:14.442 [2024-11-15 11:28:54.706039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.442 [2024-11-15 11:28:54.776558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.442 [2024-11-15 11:28:54.831078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.442 [2024-11-15 11:28:54.831133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.442 [2024-11-15 11:28:54.831160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.442 [2024-11-15 11:28:54.831171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.442 [2024-11-15 11:28:54.831181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.442 [2024-11-15 11:28:54.832819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.442 [2024-11-15 11:28:54.832889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.442 [2024-11-15 11:28:54.832960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.442 [2024-11-15 11:28:54.832964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 [2024-11-15 11:28:54.984244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:14.701 [2024-11-15 11:28:54.996497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.701 11:28:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:14.701 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:14.702 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:14.959 11:28:55 
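Editor's note: the referrals test drives the target entirely over RPC — create the TCP transport, expose the discovery service on 10.0.0.2:8009, add three referrals (127.0.0.2-4, port 4430), confirm the count, then remove them and confirm the list is empty again. A compact sketch of that same sequence using SPDK's rpc.py directly; the ./scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, while the method names, addresses and ports are exactly those in the log:

  RPC=./scripts/rpc.py                                  # assumed location inside an SPDK checkout

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length                     # expect 3
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr'    # list the referral addresses
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length                     # expect 0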
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:14.959 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:15.216 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:15.217 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:15.217 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:15.217 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.217 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:15.217 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.474 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:15.731 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:15.731 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:15.731 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:15.731 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:15.731 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.731 11:28:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.731 11:28:56 
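Editor's note: on the host side, the same state is checked through the discovery log page — `nvme discover ... -o json` returns one record per entry, jq drops the "current discovery subsystem" record so only referrals remain, and the subnqn distinguishes a referral that points at an NVM subsystem (nqn.2016-06.io.spdk:cnode1) from one that points at another discovery service. A sketch of that check using this run's host NQN/ID and target address (these values are machine-specific):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a

  # traddr of every referral the discovery service returned (excludes the service itself)
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # full record of a referral that points at an NVM subsystem, if any
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "nvme subsystem")'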
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:15.731 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:15.989 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.247 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.505 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:16.763 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:10:16.763 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:16.763 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.763 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.763 rmmod nvme_tcp 00:10:16.763 rmmod nvme_fabrics 00:10:16.763 rmmod nvme_keyring 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2873468 ']' 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2873468 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2873468 ']' 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2873468 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.764 11:28:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2873468 00:10:16.764 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.764 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.764 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2873468' 00:10:16.764 killing process with pid 2873468 00:10:16.764 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2873468 00:10:16.764 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2873468 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.023 11:28:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.023 11:28:57 
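Editor's note: nvmftestfini, running across the entries above and finishing just below, undoes the setup — unload the NVMe/TCP kernel modules, kill the target by PID, strip the SPDK_NVMF-tagged iptables rules, remove the namespace and flush the remaining address. A hedged summary of that teardown; the PID and names are this run's, and `ip netns delete` stands in for the script's _remove_spdk_ns helper:

  modprobe -v -r nvme-tcp            # dependent nvme_fabrics / nvme_keyring unload with it, as seen above
  modprobe -v -r nvme-fabrics
  kill 2873468                       # nvmf_tgt PID from this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test added
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns for this run's namespace
  ip -4 addr flush cvl_0_1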
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.929 00:10:18.929 real 0m7.207s 00:10:18.929 user 0m11.070s 00:10:18.929 sys 0m2.440s 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:18.929 ************************************ 00:10:18.929 END TEST nvmf_referrals 00:10:18.929 ************************************ 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:18.929 ************************************ 00:10:18.929 START TEST nvmf_connect_disconnect 00:10:18.929 ************************************ 00:10:18.929 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:19.188 * Looking for test storage... 00:10:19.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.188 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.188 11:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.189 --rc genhtml_branch_coverage=1 00:10:19.189 --rc genhtml_function_coverage=1 00:10:19.189 --rc genhtml_legend=1 00:10:19.189 --rc geninfo_all_blocks=1 00:10:19.189 --rc geninfo_unexecuted_blocks=1 00:10:19.189 00:10:19.189 ' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.189 --rc genhtml_branch_coverage=1 00:10:19.189 --rc genhtml_function_coverage=1 00:10:19.189 --rc genhtml_legend=1 00:10:19.189 --rc geninfo_all_blocks=1 00:10:19.189 --rc geninfo_unexecuted_blocks=1 00:10:19.189 00:10:19.189 ' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.189 --rc genhtml_branch_coverage=1 00:10:19.189 --rc genhtml_function_coverage=1 00:10:19.189 --rc genhtml_legend=1 00:10:19.189 --rc geninfo_all_blocks=1 00:10:19.189 --rc geninfo_unexecuted_blocks=1 00:10:19.189 00:10:19.189 ' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.189 --rc genhtml_branch_coverage=1 00:10:19.189 --rc genhtml_function_coverage=1 00:10:19.189 --rc genhtml_legend=1 00:10:19.189 --rc geninfo_all_blocks=1 00:10:19.189 --rc geninfo_unexecuted_blocks=1 00:10:19.189 00:10:19.189 ' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.189 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.189 11:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.190 11:28:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.721 
11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.721 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:21.722 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.722 
11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:21.722 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:21.722 Found net devices under 0000:09:00.0: cvl_0_0 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:21.722 Found net devices under 0000:09:00.1: cvl_0_1 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.722 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:10:21.723 00:10:21.723 --- 10.0.0.2 ping statistics --- 00:10:21.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.723 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:10:21.723 00:10:21.723 --- 10.0.0.1 ping statistics --- 00:10:21.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.723 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2875768 00:10:21.723 11:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2875768 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2875768 ']' 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.723 11:29:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.723 [2024-11-15 11:29:01.868977] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:10:21.723 [2024-11-15 11:29:01.869067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.723 [2024-11-15 11:29:01.944770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.723 [2024-11-15 11:29:02.003400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.723 [2024-11-15 11:29:02.003461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.723 [2024-11-15 11:29:02.003483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.723 [2024-11-15 11:29:02.003495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.723 [2024-11-15 11:29:02.003506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
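Stripped of the xtrace prefixes, the nvmf_tcp_init steps traced above build a back-to-back topology out of the two E810 ports and then launch the target inside a private network namespace. A condensed replay, with every name and address taken from this log, is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port; the harness also tags the rule with an SPDK_NVMF comment so cleanup can find it
ping -c 1 10.0.0.2                                                    # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # and back
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # the DPDK/EAL output above is this process starting

Keeping both ports on the same host this way still exercises the real E810 data path, while the namespace guarantees the initiator traffic has to cross the wire rather than loop back in the kernel.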
00:10:21.723 [2024-11-15 11:29:02.005059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.723 [2024-11-15 11:29:02.005117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.723 [2024-11-15 11:29:02.005186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.723 [2024-11-15 11:29:02.005189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.723 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.723 [2024-11-15 11:29:02.142022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 11:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:21.981 [2024-11-15 11:29:02.207377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:21.981 11:29:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:25.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.194 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:36.194 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:36.194 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:36.195 rmmod nvme_tcp 00:10:36.195 rmmod nvme_fabrics 00:10:36.195 rmmod nvme_keyring 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2875768 ']' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2875768 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2875768 ']' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2875768 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
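The provisioning the connect_disconnect test just performed is all JSON-RPC against the namespaced target; the rpc_cmd calls above are roughly equivalent to driving scripts/rpc.py by hand. A hedged replay follows (socket path and arguments copied from the trace; the per-iteration nvme-cli pair is an assumption inferred from the 'disconnected 1 controller(s)' messages, since the loop body itself is not shown in this excerpt):

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                       # TCP transport, options verbatim from the trace
$RPC bdev_malloc_create 64 512                                          # 64 MB RAM-backed bdev with 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# one of the five connect/disconnect iterations, initiator side (assumed):
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1                           # produces the 'disconnected 1 controller(s)' lines above

After the fifth clean iteration the trap fires nvmftestfini, whose teardown is traced next.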
00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2875768 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2875768' 00:10:36.195 killing process with pid 2875768 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2875768 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2875768 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.195 11:29:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.099 00:10:38.099 real 0m19.090s 00:10:38.099 user 0m57.175s 00:10:38.099 sys 0m3.357s 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.099 ************************************ 00:10:38.099 END TEST nvmf_connect_disconnect 00:10:38.099 ************************************ 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.099 11:29:18 
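That teardown (nvmftestfini plus nvmf_tcp_fini) is the mirror image of the setup: unload the initiator-side kernel modules, stop the target by PID, strip only the firewall rules the harness tagged, drop the namespace, and flush the leftover address. Condensed, with the PID and names from this run (the namespace removal command is an assumption, since _remove_spdk_ns's body is elided from the trace):

modprobe -v -r nvme-tcp                              # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill 2875768                                         # the nvmf_tgt launched earlier
wait 2875768                                         # only meaningful in the shell that spawned it, as the harness does
iptables-save | grep -v SPDK_NVMF | iptables-restore # remove only rules carrying the SPDK_NVMF comment tag
ip netns delete cvl_0_0_ns_spdk                      # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                             # clear the initiator-side address

With the first suite closed out, run_test immediately starts nvmf_multitarget, which repeats the same device discovery and namespace setup before exercising its own RPCs.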
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.099 ************************************ 00:10:38.099 START TEST nvmf_multitarget 00:10:38.099 ************************************ 00:10:38.099 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:38.358 * Looking for test storage... 00:10:38.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.358 --rc genhtml_branch_coverage=1 00:10:38.358 --rc genhtml_function_coverage=1 00:10:38.358 --rc genhtml_legend=1 00:10:38.358 --rc geninfo_all_blocks=1 00:10:38.358 --rc geninfo_unexecuted_blocks=1 00:10:38.358 00:10:38.358 ' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.358 --rc genhtml_branch_coverage=1 00:10:38.358 --rc genhtml_function_coverage=1 00:10:38.358 --rc genhtml_legend=1 00:10:38.358 --rc geninfo_all_blocks=1 00:10:38.358 --rc geninfo_unexecuted_blocks=1 00:10:38.358 00:10:38.358 ' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.358 --rc genhtml_branch_coverage=1 00:10:38.358 --rc genhtml_function_coverage=1 00:10:38.358 --rc genhtml_legend=1 00:10:38.358 --rc geninfo_all_blocks=1 00:10:38.358 --rc geninfo_unexecuted_blocks=1 00:10:38.358 00:10:38.358 ' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.358 --rc genhtml_branch_coverage=1 00:10:38.358 --rc genhtml_function_coverage=1 00:10:38.358 --rc genhtml_legend=1 00:10:38.358 --rc geninfo_all_blocks=1 00:10:38.358 --rc geninfo_unexecuted_blocks=1 00:10:38.358 00:10:38.358 ' 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.358 11:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.358 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:38.359 11:29:18 
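One stderr line stands out while common.sh is sourced above: line 33 reports '[: : integer expression expected'. That is bash objecting to a numeric test whose variable expanded to the empty string; the comparison simply evaluates false and the run continues, so it is noise rather than a failure. A common way to make such a test empty-safe (illustrative only; the variable name is hypothetical and this is not a change present in the repo):

[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo "flag enabled"    # default to 0 so an unset or empty flag never triggers the -eq error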
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.359 11:29:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.261 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:40.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:40.262 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:40.262 Found net devices under 0000:09:00.0: cvl_0_0 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:40.262 Found net devices under 0000:09:00.1: cvl_0_1 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.262 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:10:40.521 00:10:40.521 --- 10.0.0.2 ping statistics --- 00:10:40.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.521 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:10:40.521 00:10:40.521 --- 10.0.0.1 ping statistics --- 00:10:40.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.521 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2879535 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2879535 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2879535 ']' 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.521 11:29:20 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:40.521 [2024-11-15 11:29:20.862546] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:10:40.521 [2024-11-15 11:29:20.862651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.521 [2024-11-15 11:29:20.934971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.779 [2024-11-15 11:29:20.994402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.779 [2024-11-15 11:29:20.994448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.779 [2024-11-15 11:29:20.994476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.779 [2024-11-15 11:29:20.994487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.779 [2024-11-15 11:29:20.994497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.779 [2024-11-15 11:29:20.996053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.779 [2024-11-15 11:29:20.996144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.779 [2024-11-15 11:29:20.996216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.779 [2024-11-15 11:29:20.996220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:40.779 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:41.038 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:41.038 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:41.038 "nvmf_tgt_1" 00:10:41.038 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:41.296 "nvmf_tgt_2" 00:10:41.296 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
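The multitarget flow above (continuing just below) checks that the app starts with exactly one target, adds two more named targets through the test's multitarget_rpc.py helper, confirms the count reaches three, deletes both, and confirms the count drops back to one. Flattened, with arguments copied from the trace ('-s 32' is presumably the per-target subsystem limit):

MT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$MT nvmf_get_targets | jq length              # expect 1: only the default target exists
$MT nvmf_create_target -n nvmf_tgt_1 -s 32
$MT nvmf_create_target -n nvmf_tgt_2 -s 32
$MT nvmf_get_targets | jq length              # expect 3
$MT nvmf_delete_target -n nvmf_tgt_1
$MT nvmf_delete_target -n nvmf_tgt_2
$MT nvmf_get_targets | jq length              # expect 1 again

Each create echoes the new target's name (the quoted "nvmf_tgt_1"/"nvmf_tgt_2" lines) and each delete returns true; a count mismatch would trip the '!=' checks and fail the test.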
00:10:41.296 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:41.296 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:41.296 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:41.553 true 00:10:41.553 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:41.553 true 00:10:41.553 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:41.553 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:41.811 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:41.811 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:41.811 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:41.811 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.811 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:41.811 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.812 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:41.812 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.812 11:29:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.812 rmmod nvme_tcp 00:10:41.812 rmmod nvme_fabrics 00:10:41.812 rmmod nvme_keyring 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2879535 ']' 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2879535 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2879535 ']' 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2879535 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2879535 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.812 11:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2879535' 00:10:41.812 killing process with pid 2879535 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2879535 00:10:41.812 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2879535 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.070 11:29:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.972 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.973 00:10:43.973 real 0m5.866s 00:10:43.973 user 0m6.795s 00:10:43.973 sys 0m1.968s 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:43.973 ************************************ 00:10:43.973 END TEST nvmf_multitarget 00:10:43.973 ************************************ 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.973 ************************************ 00:10:43.973 START TEST nvmf_rpc 00:10:43.973 ************************************ 00:10:43.973 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:44.231 * Looking for test storage... 
00:10:44.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:44.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.231 --rc genhtml_branch_coverage=1 00:10:44.231 --rc genhtml_function_coverage=1 00:10:44.231 --rc genhtml_legend=1 00:10:44.231 --rc geninfo_all_blocks=1 00:10:44.231 --rc geninfo_unexecuted_blocks=1 00:10:44.231 00:10:44.231 ' 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:44.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.231 --rc genhtml_branch_coverage=1 00:10:44.231 --rc genhtml_function_coverage=1 00:10:44.231 --rc genhtml_legend=1 00:10:44.231 --rc geninfo_all_blocks=1 00:10:44.231 --rc geninfo_unexecuted_blocks=1 00:10:44.231 00:10:44.231 ' 00:10:44.231 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:44.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.232 --rc genhtml_branch_coverage=1 00:10:44.232 --rc genhtml_function_coverage=1 00:10:44.232 --rc genhtml_legend=1 00:10:44.232 --rc geninfo_all_blocks=1 00:10:44.232 --rc geninfo_unexecuted_blocks=1 00:10:44.232 00:10:44.232 ' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:44.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.232 --rc genhtml_branch_coverage=1 00:10:44.232 --rc genhtml_function_coverage=1 00:10:44.232 --rc genhtml_legend=1 00:10:44.232 --rc geninfo_all_blocks=1 00:10:44.232 --rc geninfo_unexecuted_blocks=1 00:10:44.232 00:10:44.232 ' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
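The lcov check traced above (lt 1.15 2, then cmp_versions) simply splits the two version strings on dots and compares them field by field. A minimal stand-in for that idiom, not the actual scripts/common.sh implementation:

    # Return success if version $1 sorts before version $2 (numeric, dot-separated).
    version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"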
00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.232 11:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.232 11:29:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.764 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:46.765 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:46.765 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:46.765 Found net devices under 0000:09:00.0: cvl_0_0 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:46.765 Found net devices under 0000:09:00.1: cvl_0_1 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.765 11:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:10:46.765 00:10:46.765 --- 10.0.0.2 ping statistics --- 00:10:46.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.765 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:10:46.765 00:10:46.765 --- 10.0.0.1 ping statistics --- 00:10:46.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.765 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2881640 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2881640 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2881640 ']' 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.765 11:29:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.765 [2024-11-15 11:29:26.974924] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
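Before starting the target, the trace above moved one port of the E810 pair (cvl_0_0) into a private network namespace and left the other (cvl_0_1) in the host, so initiator and target talk over real hardware on 10.0.0.0/24. Condensed to the bare commands actually run in this test bed (names and addresses are specific to this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host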
00:10:46.766 [2024-11-15 11:29:26.975015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.766 [2024-11-15 11:29:27.048780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.766 [2024-11-15 11:29:27.109682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.766 [2024-11-15 11:29:27.109731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.766 [2024-11-15 11:29:27.109759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.766 [2024-11-15 11:29:27.109771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.766 [2024-11-15 11:29:27.109781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.766 [2024-11-15 11:29:27.111372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.766 [2024-11-15 11:29:27.111430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.766 [2024-11-15 11:29:27.111498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.766 [2024-11-15 11:29:27.111502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:47.024 "tick_rate": 2700000000, 00:10:47.024 "poll_groups": [ 00:10:47.024 { 00:10:47.024 "name": "nvmf_tgt_poll_group_000", 00:10:47.024 "admin_qpairs": 0, 00:10:47.024 "io_qpairs": 0, 00:10:47.024 "current_admin_qpairs": 0, 00:10:47.024 "current_io_qpairs": 0, 00:10:47.024 "pending_bdev_io": 0, 00:10:47.024 "completed_nvme_io": 0, 00:10:47.024 "transports": [] 00:10:47.024 }, 00:10:47.024 { 00:10:47.024 "name": "nvmf_tgt_poll_group_001", 00:10:47.024 "admin_qpairs": 0, 00:10:47.024 "io_qpairs": 0, 00:10:47.024 "current_admin_qpairs": 0, 00:10:47.024 "current_io_qpairs": 0, 00:10:47.024 "pending_bdev_io": 0, 00:10:47.024 "completed_nvme_io": 0, 00:10:47.024 "transports": [] 00:10:47.024 }, 00:10:47.024 { 00:10:47.024 "name": "nvmf_tgt_poll_group_002", 00:10:47.024 "admin_qpairs": 0, 00:10:47.024 "io_qpairs": 0, 00:10:47.024 
"current_admin_qpairs": 0, 00:10:47.024 "current_io_qpairs": 0, 00:10:47.024 "pending_bdev_io": 0, 00:10:47.024 "completed_nvme_io": 0, 00:10:47.024 "transports": [] 00:10:47.024 }, 00:10:47.024 { 00:10:47.024 "name": "nvmf_tgt_poll_group_003", 00:10:47.024 "admin_qpairs": 0, 00:10:47.024 "io_qpairs": 0, 00:10:47.024 "current_admin_qpairs": 0, 00:10:47.024 "current_io_qpairs": 0, 00:10:47.024 "pending_bdev_io": 0, 00:10:47.024 "completed_nvme_io": 0, 00:10:47.024 "transports": [] 00:10:47.024 } 00:10:47.024 ] 00:10:47.024 }' 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.024 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.024 [2024-11-15 11:29:27.343021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:47.025 "tick_rate": 2700000000, 00:10:47.025 "poll_groups": [ 00:10:47.025 { 00:10:47.025 "name": "nvmf_tgt_poll_group_000", 00:10:47.025 "admin_qpairs": 0, 00:10:47.025 "io_qpairs": 0, 00:10:47.025 "current_admin_qpairs": 0, 00:10:47.025 "current_io_qpairs": 0, 00:10:47.025 "pending_bdev_io": 0, 00:10:47.025 "completed_nvme_io": 0, 00:10:47.025 "transports": [ 00:10:47.025 { 00:10:47.025 "trtype": "TCP" 00:10:47.025 } 00:10:47.025 ] 00:10:47.025 }, 00:10:47.025 { 00:10:47.025 "name": "nvmf_tgt_poll_group_001", 00:10:47.025 "admin_qpairs": 0, 00:10:47.025 "io_qpairs": 0, 00:10:47.025 "current_admin_qpairs": 0, 00:10:47.025 "current_io_qpairs": 0, 00:10:47.025 "pending_bdev_io": 0, 00:10:47.025 "completed_nvme_io": 0, 00:10:47.025 "transports": [ 00:10:47.025 { 00:10:47.025 "trtype": "TCP" 00:10:47.025 } 00:10:47.025 ] 00:10:47.025 }, 00:10:47.025 { 00:10:47.025 "name": "nvmf_tgt_poll_group_002", 00:10:47.025 "admin_qpairs": 0, 00:10:47.025 "io_qpairs": 0, 00:10:47.025 "current_admin_qpairs": 0, 00:10:47.025 "current_io_qpairs": 0, 00:10:47.025 "pending_bdev_io": 0, 00:10:47.025 "completed_nvme_io": 0, 00:10:47.025 "transports": [ 00:10:47.025 { 00:10:47.025 "trtype": "TCP" 
00:10:47.025 } 00:10:47.025 ] 00:10:47.025 }, 00:10:47.025 { 00:10:47.025 "name": "nvmf_tgt_poll_group_003", 00:10:47.025 "admin_qpairs": 0, 00:10:47.025 "io_qpairs": 0, 00:10:47.025 "current_admin_qpairs": 0, 00:10:47.025 "current_io_qpairs": 0, 00:10:47.025 "pending_bdev_io": 0, 00:10:47.025 "completed_nvme_io": 0, 00:10:47.025 "transports": [ 00:10:47.025 { 00:10:47.025 "trtype": "TCP" 00:10:47.025 } 00:10:47.025 ] 00:10:47.025 } 00:10:47.025 ] 00:10:47.025 }' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.025 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.284 Malloc1 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.284 [2024-11-15 11:29:27.500537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:47.284 [2024-11-15 11:29:27.523170] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:47.284 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:47.284 could not add new controller: failed to write to nvme-fabrics device 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:47.284 11:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.284 11:29:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.849 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.849 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.849 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.849 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.849 11:29:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.377 [2024-11-15 11:29:30.372586] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:50.377 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:50.377 could not add new controller: failed to write to nvme-fabrics device 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.377 
11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.377 11:29:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.943 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.943 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:50.943 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.943 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:50.943 11:29:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.840 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.841 
11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 [2024-11-15 11:29:33.206714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.841 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.833 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.833 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:53.833 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.833 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:53.833 11:29:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.731 11:29:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 [2024-11-15 11:29:36.041762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.731 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.297 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.297 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.297 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.297 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.297 11:29:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 [2024-11-15 11:29:38.837586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.822 11:29:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.079 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.079 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.079 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.079 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:59.079 11:29:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.606 
11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.606 [2024-11-15 11:29:41.628788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.606 11:29:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.864 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:01.864 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:01.864 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.864 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:01.864 11:29:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.392 [2024-11-15 11:29:44.398295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.392 11:29:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.650 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.650 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.650 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.650 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.650 11:29:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:07.176 
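For readability, here is a minimal bash sketch of the create/connect/teardown cycle that the trace above repeats five times (target/rpc.sh lines 81-94). It is a condensed illustration, not the actual rpc.sh code: the ./scripts/rpc.py invocation, the stand-alone polling loop, and the omission of the --hostnqn/--hostid flags used in the real nvme connect calls are assumptions made here for brevity.

loops=5
for i in $(seq 1 "$loops"); do
    # target side: build the subsystem, listen on TCP 10.0.0.2:4420, attach Malloc1 as nsid 5
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # initiator side: connect, then poll lsblk until a namespace with our serial shows up
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
    # teardown: disconnect the host, drop the namespace, delete the subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

The second loop traced below (target/rpc.sh lines 99-107) exercises the same RPCs without an initiator connect, so only the rpc.py half of this sketch applies there.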
11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 [2024-11-15 11:29:47.145539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.176 [2024-11-15 11:29:47.193565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.176 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 
11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 [2024-11-15 11:29:47.241716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 [2024-11-15 11:29:47.289858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 [2024-11-15 11:29:47.338022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.177 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:07.177 "tick_rate": 2700000000, 00:11:07.177 "poll_groups": [ 00:11:07.177 { 00:11:07.177 "name": "nvmf_tgt_poll_group_000", 00:11:07.177 "admin_qpairs": 2, 00:11:07.177 "io_qpairs": 84, 00:11:07.177 "current_admin_qpairs": 0, 00:11:07.177 "current_io_qpairs": 0, 00:11:07.177 "pending_bdev_io": 0, 00:11:07.177 "completed_nvme_io": 136, 00:11:07.177 "transports": [ 00:11:07.177 { 00:11:07.177 "trtype": "TCP" 00:11:07.177 } 00:11:07.177 ] 00:11:07.177 }, 00:11:07.177 { 00:11:07.177 "name": "nvmf_tgt_poll_group_001", 00:11:07.177 "admin_qpairs": 2, 00:11:07.177 "io_qpairs": 84, 00:11:07.177 "current_admin_qpairs": 0, 00:11:07.177 "current_io_qpairs": 0, 00:11:07.177 "pending_bdev_io": 0, 00:11:07.177 "completed_nvme_io": 185, 00:11:07.177 "transports": [ 00:11:07.177 { 00:11:07.177 "trtype": "TCP" 00:11:07.177 } 00:11:07.177 ] 00:11:07.177 }, 00:11:07.177 { 00:11:07.177 "name": "nvmf_tgt_poll_group_002", 00:11:07.178 "admin_qpairs": 1, 00:11:07.178 "io_qpairs": 84, 00:11:07.178 "current_admin_qpairs": 0, 00:11:07.178 "current_io_qpairs": 0, 00:11:07.178 "pending_bdev_io": 0, 00:11:07.178 "completed_nvme_io": 182, 00:11:07.178 "transports": [ 00:11:07.178 { 00:11:07.178 "trtype": "TCP" 00:11:07.178 } 00:11:07.178 ] 00:11:07.178 }, 00:11:07.178 { 00:11:07.178 "name": "nvmf_tgt_poll_group_003", 00:11:07.178 "admin_qpairs": 2, 00:11:07.178 "io_qpairs": 84, 00:11:07.178 "current_admin_qpairs": 0, 00:11:07.178 "current_io_qpairs": 0, 00:11:07.178 "pending_bdev_io": 0, 00:11:07.178 "completed_nvme_io": 183, 00:11:07.178 "transports": [ 00:11:07.178 { 00:11:07.178 "trtype": "TCP" 00:11:07.178 } 00:11:07.178 ] 00:11:07.178 } 00:11:07.178 ] 00:11:07.178 }' 00:11:07.178 11:29:47 
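The totals 7 and 336 that the checks below compare against zero come straight from the stats blob above: admin_qpairs 2+2+1+2 across the four poll groups, and io_qpairs 84*4. A minimal shell sketch of that summation, assuming the nvmf_get_stats output has been saved to stats.json; the jsum helper traced at rpc.sh line 20 feeds the stats it captured earlier rather than a file, so the filename here is purely illustrative.

# sum one numeric field over every poll group in the stats JSON
jsum() {
    local filter=$1
    jq "$filter" stats.json | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # prints 7
jsum '.poll_groups[].io_qpairs'      # prints 336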
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.178 rmmod nvme_tcp 00:11:07.178 rmmod nvme_fabrics 00:11:07.178 rmmod nvme_keyring 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2881640 ']' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2881640 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2881640 ']' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2881640 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2881640 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2881640' 00:11:07.178 killing process with pid 2881640 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2881640 00:11:07.178 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2881640 00:11:07.436 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:07.436 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:07.436 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:07.436 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.437 11:29:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.973 00:11:09.973 real 0m25.468s 00:11:09.973 user 1m22.326s 00:11:09.973 sys 0m4.233s 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:09.973 ************************************ 00:11:09.973 END TEST nvmf_rpc 00:11:09.973 ************************************ 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.973 ************************************ 00:11:09.973 START TEST nvmf_invalid 00:11:09.973 ************************************ 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:09.973 * Looking for test storage... 
00:11:09.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:09.973 11:29:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.973 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.974 --rc genhtml_branch_coverage=1 00:11:09.974 --rc genhtml_function_coverage=1 00:11:09.974 --rc genhtml_legend=1 00:11:09.974 --rc geninfo_all_blocks=1 00:11:09.974 --rc geninfo_unexecuted_blocks=1 00:11:09.974 00:11:09.974 ' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.974 --rc genhtml_branch_coverage=1 00:11:09.974 --rc genhtml_function_coverage=1 00:11:09.974 --rc genhtml_legend=1 00:11:09.974 --rc geninfo_all_blocks=1 00:11:09.974 --rc geninfo_unexecuted_blocks=1 00:11:09.974 00:11:09.974 ' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.974 --rc genhtml_branch_coverage=1 00:11:09.974 --rc genhtml_function_coverage=1 00:11:09.974 --rc genhtml_legend=1 00:11:09.974 --rc geninfo_all_blocks=1 00:11:09.974 --rc geninfo_unexecuted_blocks=1 00:11:09.974 00:11:09.974 ' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.974 --rc genhtml_branch_coverage=1 00:11:09.974 --rc genhtml_function_coverage=1 00:11:09.974 --rc genhtml_legend=1 00:11:09.974 --rc geninfo_all_blocks=1 00:11:09.974 --rc geninfo_unexecuted_blocks=1 00:11:09.974 00:11:09.974 ' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:09.974 11:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.974 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.975 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.975 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.975 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.975 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.975 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.975 11:29:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:11.878 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:11.878 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:11.878 Found net devices under 0000:09:00.0: cvl_0_0 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:11.878 Found net devices under 0000:09:00.1: cvl_0_1 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.878 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:12.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:11:12.138 00:11:12.138 --- 10.0.0.2 ping statistics --- 00:11:12.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.138 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:11:12.138 00:11:12.138 --- 10.0.0.1 ping statistics --- 00:11:12.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.138 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2886149 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2886149 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2886149 ']' 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.138 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:12.138 [2024-11-15 11:29:52.464413] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
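The trace above prepares the TCP test network by moving the target-side E810 port into its own network namespace and then launching nvmf_tgt inside that namespace. A minimal sketch of the equivalent steps, with interface names, addresses, and paths taken directly from the log (the helper functions in nvmf/common.sh additionally handle flushing, error checking, and cleanup):

    # target port lives in a private namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # allow NVMe/TCP to port 4420
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # connectivity check both ways
    modprobe nvme-tcp
    # start the SPDK NVMe-oF target inside the namespace with the same flags as the trace
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

With both pings succeeding, the script records the target pid (nvmfpid=2886149 in this run) and waits for it to listen on /var/tmp/spdk.sock before issuing any RPCs.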
00:11:12.138 [2024-11-15 11:29:52.464494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.138 [2024-11-15 11:29:52.537248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.396 [2024-11-15 11:29:52.598978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.396 [2024-11-15 11:29:52.599027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.396 [2024-11-15 11:29:52.599040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.396 [2024-11-15 11:29:52.599052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.396 [2024-11-15 11:29:52.599062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.396 [2024-11-15 11:29:52.600862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.396 [2024-11-15 11:29:52.600983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.396 [2024-11-15 11:29:52.601057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.396 [2024-11-15 11:29:52.601062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:12.396 11:29:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26643 00:11:12.654 [2024-11-15 11:29:52.991824] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:12.654 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:12.654 { 00:11:12.654 "nqn": "nqn.2016-06.io.spdk:cnode26643", 00:11:12.654 "tgt_name": "foobar", 00:11:12.654 "method": "nvmf_create_subsystem", 00:11:12.654 "req_id": 1 00:11:12.654 } 00:11:12.654 Got JSON-RPC error response 00:11:12.654 response: 00:11:12.654 { 00:11:12.654 "code": -32603, 00:11:12.654 "message": "Unable to find target foobar" 00:11:12.654 }' 00:11:12.654 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:12.654 { 00:11:12.654 "nqn": "nqn.2016-06.io.spdk:cnode26643", 00:11:12.654 "tgt_name": "foobar", 00:11:12.654 "method": "nvmf_create_subsystem", 00:11:12.654 "req_id": 1 00:11:12.654 } 00:11:12.654 Got JSON-RPC error response 00:11:12.654 
response: 00:11:12.654 { 00:11:12.654 "code": -32603, 00:11:12.654 "message": "Unable to find target foobar" 00:11:12.654 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:12.654 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:12.654 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17418 00:11:12.913 [2024-11-15 11:29:53.260718] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17418: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:12.913 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:12.913 { 00:11:12.913 "nqn": "nqn.2016-06.io.spdk:cnode17418", 00:11:12.913 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:12.913 "method": "nvmf_create_subsystem", 00:11:12.913 "req_id": 1 00:11:12.913 } 00:11:12.913 Got JSON-RPC error response 00:11:12.913 response: 00:11:12.913 { 00:11:12.913 "code": -32602, 00:11:12.913 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:12.913 }' 00:11:12.913 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:12.913 { 00:11:12.913 "nqn": "nqn.2016-06.io.spdk:cnode17418", 00:11:12.913 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:12.913 "method": "nvmf_create_subsystem", 00:11:12.913 "req_id": 1 00:11:12.913 } 00:11:12.913 Got JSON-RPC error response 00:11:12.913 response: 00:11:12.913 { 00:11:12.913 "code": -32602, 00:11:12.913 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:12.913 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:12.913 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:12.913 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1743 00:11:13.220 [2024-11-15 11:29:53.533634] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1743: invalid model number 'SPDK_Controller' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:13.220 { 00:11:13.220 "nqn": "nqn.2016-06.io.spdk:cnode1743", 00:11:13.220 "model_number": "SPDK_Controller\u001f", 00:11:13.220 "method": "nvmf_create_subsystem", 00:11:13.220 "req_id": 1 00:11:13.220 } 00:11:13.220 Got JSON-RPC error response 00:11:13.220 response: 00:11:13.220 { 00:11:13.220 "code": -32602, 00:11:13.220 "message": "Invalid MN SPDK_Controller\u001f" 00:11:13.220 }' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:13.220 { 00:11:13.220 "nqn": "nqn.2016-06.io.spdk:cnode1743", 00:11:13.220 "model_number": "SPDK_Controller\u001f", 00:11:13.220 "method": "nvmf_create_subsystem", 00:11:13.220 "req_id": 1 00:11:13.220 } 00:11:13.220 Got JSON-RPC error response 00:11:13.220 response: 00:11:13.220 { 00:11:13.220 "code": -32602, 00:11:13.220 "message": "Invalid MN SPDK_Controller\u001f" 00:11:13.220 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:13.220 11:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
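The long run of printf %x / echo -e / string+= steps here is gen_random_s assembling a 21-character serial number one byte at a time (a 41-character model number is built the same way further down). A condensed sketch of that append pattern follows; the index-selection arithmetic is not visible in this trace, so the RANDOM-based pick below is illustrative only:

    # condensed form of the append pattern seen in the trace:
    # pick a code point, render it with printf %x + echo -e, append it to the string
    length=21
    chars=($(seq 32 127))                      # same code-point range as the chars=() array above
    string=
    for (( ll = 0; ll < length; ll++ )); do
        c=${chars[RANDOM % ${#chars[@]}]}      # illustrative pick; the script seeds RANDOM=0
        string+=$(echo -e "\x$(printf %x "$c")")
    done
    echo "$string"

The resulting string (r??Q3B8%"T"d$ZrL)_?Sp in this run) is then passed to rpc.py nvmf_create_subsystem -s, which the target rejects with an "Invalid SN" JSON-RPC error, exactly as the fixed SPDKISFASTANDAWESOME serial with a trailing control character was rejected above.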
00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.220 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 
00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 
00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:13.221 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'r??Q3B8%"T"d$ZrL)_?Sp' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'r??Q3B8%"T"d$ZrL)_?Sp' nqn.2016-06.io.spdk:cnode26938 00:11:13.505 [2024-11-15 11:29:53.882822] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26938: invalid serial number 'r??Q3B8%"T"d$ZrL)_?Sp' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:13.505 { 00:11:13.505 "nqn": "nqn.2016-06.io.spdk:cnode26938", 00:11:13.505 "serial_number": "r??Q3B8%\"T\"d$ZrL)_?Sp", 00:11:13.505 "method": "nvmf_create_subsystem", 00:11:13.505 "req_id": 1 00:11:13.505 } 00:11:13.505 Got JSON-RPC error response 00:11:13.505 response: 00:11:13.505 { 
00:11:13.505 "code": -32602, 00:11:13.505 "message": "Invalid SN r??Q3B8%\"T\"d$ZrL)_?Sp" 00:11:13.505 }' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:13.505 { 00:11:13.505 "nqn": "nqn.2016-06.io.spdk:cnode26938", 00:11:13.505 "serial_number": "r??Q3B8%\"T\"d$ZrL)_?Sp", 00:11:13.505 "method": "nvmf_create_subsystem", 00:11:13.505 "req_id": 1 00:11:13.505 } 00:11:13.505 Got JSON-RPC error response 00:11:13.505 response: 00:11:13.505 { 00:11:13.505 "code": -32602, 00:11:13.505 "message": "Invalid SN r??Q3B8%\"T\"d$ZrL)_?Sp" 00:11:13.505 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 
00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.505 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:13.764 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 
00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x79' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:13.765 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 67 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e' 00:11:13.766 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e' nqn.2016-06.io.spdk:cnode29558 00:11:14.023 [2024-11-15 11:29:54.304151] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29558: invalid model number 'ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e' 00:11:14.023 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:14.023 { 00:11:14.023 "nqn": "nqn.2016-06.io.spdk:cnode29558", 00:11:14.023 "model_number": "ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e", 00:11:14.023 "method": "nvmf_create_subsystem", 00:11:14.023 "req_id": 1 00:11:14.023 } 00:11:14.023 Got JSON-RPC error response 00:11:14.023 response: 00:11:14.023 { 00:11:14.023 "code": -32602, 00:11:14.023 "message": "Invalid MN ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e" 00:11:14.023 }' 00:11:14.023 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:14.023 { 00:11:14.023 "nqn": "nqn.2016-06.io.spdk:cnode29558", 00:11:14.023 "model_number": "ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e", 00:11:14.023 "method": "nvmf_create_subsystem", 00:11:14.023 "req_id": 1 
00:11:14.023 } 00:11:14.023 Got JSON-RPC error response 00:11:14.023 response: 00:11:14.023 { 00:11:14.023 "code": -32602, 00:11:14.023 "message": "Invalid MN ZS7iy*rKf7QGv$-ItoOg%C0uAA!e:oy>)hH54CN|e" 00:11:14.023 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:14.023 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:14.282 [2024-11-15 11:29:54.573128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.282 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:14.539 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:14.539 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:14.539 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:14.539 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:14.539 11:29:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:14.797 [2024-11-15 11:29:55.134969] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:14.797 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:14.797 { 00:11:14.797 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:14.797 "listen_address": { 00:11:14.797 "trtype": "tcp", 00:11:14.797 "traddr": "", 00:11:14.797 "trsvcid": "4421" 00:11:14.797 }, 00:11:14.797 "method": "nvmf_subsystem_remove_listener", 00:11:14.797 "req_id": 1 00:11:14.797 } 00:11:14.797 Got JSON-RPC error response 00:11:14.797 response: 00:11:14.797 { 00:11:14.797 "code": -32602, 00:11:14.797 "message": "Invalid parameters" 00:11:14.797 }' 00:11:14.797 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:14.797 { 00:11:14.797 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:14.797 "listen_address": { 00:11:14.797 "trtype": "tcp", 00:11:14.797 "traddr": "", 00:11:14.797 "trsvcid": "4421" 00:11:14.797 }, 00:11:14.797 "method": "nvmf_subsystem_remove_listener", 00:11:14.797 "req_id": 1 00:11:14.797 } 00:11:14.797 Got JSON-RPC error response 00:11:14.797 response: 00:11:14.797 { 00:11:14.797 "code": -32602, 00:11:14.797 "message": "Invalid parameters" 00:11:14.797 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:14.797 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15497 -i 0 00:11:15.054 [2024-11-15 11:29:55.399795] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15497: invalid cntlid range [0-65519] 00:11:15.054 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:15.054 { 00:11:15.054 "nqn": "nqn.2016-06.io.spdk:cnode15497", 00:11:15.054 "min_cntlid": 0, 00:11:15.054 "method": "nvmf_create_subsystem", 00:11:15.054 "req_id": 1 00:11:15.054 } 00:11:15.054 Got JSON-RPC error response 00:11:15.054 response: 00:11:15.054 { 00:11:15.054 "code": -32602, 00:11:15.054 "message": "Invalid cntlid range 
[0-65519]" 00:11:15.054 }' 00:11:15.054 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:15.054 { 00:11:15.054 "nqn": "nqn.2016-06.io.spdk:cnode15497", 00:11:15.054 "min_cntlid": 0, 00:11:15.054 "method": "nvmf_create_subsystem", 00:11:15.054 "req_id": 1 00:11:15.054 } 00:11:15.054 Got JSON-RPC error response 00:11:15.054 response: 00:11:15.054 { 00:11:15.054 "code": -32602, 00:11:15.054 "message": "Invalid cntlid range [0-65519]" 00:11:15.054 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:15.054 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5852 -i 65520 00:11:15.312 [2024-11-15 11:29:55.668706] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5852: invalid cntlid range [65520-65519] 00:11:15.312 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:15.312 { 00:11:15.312 "nqn": "nqn.2016-06.io.spdk:cnode5852", 00:11:15.312 "min_cntlid": 65520, 00:11:15.312 "method": "nvmf_create_subsystem", 00:11:15.312 "req_id": 1 00:11:15.312 } 00:11:15.312 Got JSON-RPC error response 00:11:15.312 response: 00:11:15.312 { 00:11:15.312 "code": -32602, 00:11:15.312 "message": "Invalid cntlid range [65520-65519]" 00:11:15.312 }' 00:11:15.312 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:15.312 { 00:11:15.312 "nqn": "nqn.2016-06.io.spdk:cnode5852", 00:11:15.312 "min_cntlid": 65520, 00:11:15.312 "method": "nvmf_create_subsystem", 00:11:15.312 "req_id": 1 00:11:15.312 } 00:11:15.312 Got JSON-RPC error response 00:11:15.312 response: 00:11:15.312 { 00:11:15.312 "code": -32602, 00:11:15.312 "message": "Invalid cntlid range [65520-65519]" 00:11:15.312 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:15.312 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6153 -I 0 00:11:15.570 [2024-11-15 11:29:55.953707] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6153: invalid cntlid range [1-0] 00:11:15.570 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:15.570 { 00:11:15.570 "nqn": "nqn.2016-06.io.spdk:cnode6153", 00:11:15.570 "max_cntlid": 0, 00:11:15.570 "method": "nvmf_create_subsystem", 00:11:15.570 "req_id": 1 00:11:15.570 } 00:11:15.570 Got JSON-RPC error response 00:11:15.570 response: 00:11:15.570 { 00:11:15.570 "code": -32602, 00:11:15.570 "message": "Invalid cntlid range [1-0]" 00:11:15.570 }' 00:11:15.570 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:15.570 { 00:11:15.570 "nqn": "nqn.2016-06.io.spdk:cnode6153", 00:11:15.570 "max_cntlid": 0, 00:11:15.570 "method": "nvmf_create_subsystem", 00:11:15.570 "req_id": 1 00:11:15.570 } 00:11:15.570 Got JSON-RPC error response 00:11:15.570 response: 00:11:15.570 { 00:11:15.570 "code": -32602, 00:11:15.570 "message": "Invalid cntlid range [1-0]" 00:11:15.570 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:15.570 11:29:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11741 -I 65520 00:11:15.828 [2024-11-15 
11:29:56.218547] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11741: invalid cntlid range [1-65520] 00:11:15.828 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:15.828 { 00:11:15.828 "nqn": "nqn.2016-06.io.spdk:cnode11741", 00:11:15.828 "max_cntlid": 65520, 00:11:15.828 "method": "nvmf_create_subsystem", 00:11:15.828 "req_id": 1 00:11:15.828 } 00:11:15.828 Got JSON-RPC error response 00:11:15.828 response: 00:11:15.828 { 00:11:15.828 "code": -32602, 00:11:15.828 "message": "Invalid cntlid range [1-65520]" 00:11:15.828 }' 00:11:15.828 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:15.828 { 00:11:15.828 "nqn": "nqn.2016-06.io.spdk:cnode11741", 00:11:15.828 "max_cntlid": 65520, 00:11:15.828 "method": "nvmf_create_subsystem", 00:11:15.828 "req_id": 1 00:11:15.828 } 00:11:15.828 Got JSON-RPC error response 00:11:15.828 response: 00:11:15.828 { 00:11:15.828 "code": -32602, 00:11:15.828 "message": "Invalid cntlid range [1-65520]" 00:11:15.828 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:15.828 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15165 -i 6 -I 5 00:11:16.086 [2024-11-15 11:29:56.491474] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15165: invalid cntlid range [6-5] 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:16.344 { 00:11:16.344 "nqn": "nqn.2016-06.io.spdk:cnode15165", 00:11:16.344 "min_cntlid": 6, 00:11:16.344 "max_cntlid": 5, 00:11:16.344 "method": "nvmf_create_subsystem", 00:11:16.344 "req_id": 1 00:11:16.344 } 00:11:16.344 Got JSON-RPC error response 00:11:16.344 response: 00:11:16.344 { 00:11:16.344 "code": -32602, 00:11:16.344 "message": "Invalid cntlid range [6-5]" 00:11:16.344 }' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:16.344 { 00:11:16.344 "nqn": "nqn.2016-06.io.spdk:cnode15165", 00:11:16.344 "min_cntlid": 6, 00:11:16.344 "max_cntlid": 5, 00:11:16.344 "method": "nvmf_create_subsystem", 00:11:16.344 "req_id": 1 00:11:16.344 } 00:11:16.344 Got JSON-RPC error response 00:11:16.344 response: 00:11:16.344 { 00:11:16.344 "code": -32602, 00:11:16.344 "message": "Invalid cntlid range [6-5]" 00:11:16.344 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:16.344 { 00:11:16.344 "name": "foobar", 00:11:16.344 "method": "nvmf_delete_target", 00:11:16.344 "req_id": 1 00:11:16.344 } 00:11:16.344 Got JSON-RPC error response 00:11:16.344 response: 00:11:16.344 { 00:11:16.344 "code": -32602, 00:11:16.344 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:11:16.344 }' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:16.344 { 00:11:16.344 "name": "foobar", 00:11:16.344 "method": "nvmf_delete_target", 00:11:16.344 "req_id": 1 00:11:16.344 } 00:11:16.344 Got JSON-RPC error response 00:11:16.344 response: 00:11:16.344 { 00:11:16.344 "code": -32602, 00:11:16.344 "message": "The specified target doesn't exist, cannot delete it." 00:11:16.344 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.344 rmmod nvme_tcp 00:11:16.344 rmmod nvme_fabrics 00:11:16.344 rmmod nvme_keyring 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2886149 ']' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2886149 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2886149 ']' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2886149 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2886149 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2886149' 00:11:16.344 killing process with pid 2886149 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2886149 00:11:16.344 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2886149 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.602 11:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.602 11:29:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.141 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.141 00:11:19.141 real 0m9.081s 00:11:19.141 user 0m21.426s 00:11:19.141 sys 0m2.583s 00:11:19.141 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.141 11:29:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 ************************************ 00:11:19.141 END TEST nvmf_invalid 00:11:19.141 ************************************ 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.141 ************************************ 00:11:19.141 START TEST nvmf_connect_stress 00:11:19.141 ************************************ 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:19.141 * Looking for test storage... 
00:11:19.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.141 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.142 --rc genhtml_branch_coverage=1 00:11:19.142 --rc genhtml_function_coverage=1 00:11:19.142 --rc genhtml_legend=1 00:11:19.142 --rc geninfo_all_blocks=1 00:11:19.142 --rc geninfo_unexecuted_blocks=1 00:11:19.142 00:11:19.142 ' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.142 --rc genhtml_branch_coverage=1 00:11:19.142 --rc genhtml_function_coverage=1 00:11:19.142 --rc genhtml_legend=1 00:11:19.142 --rc geninfo_all_blocks=1 00:11:19.142 --rc geninfo_unexecuted_blocks=1 00:11:19.142 00:11:19.142 ' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.142 --rc genhtml_branch_coverage=1 00:11:19.142 --rc genhtml_function_coverage=1 00:11:19.142 --rc genhtml_legend=1 00:11:19.142 --rc geninfo_all_blocks=1 00:11:19.142 --rc geninfo_unexecuted_blocks=1 00:11:19.142 00:11:19.142 ' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:19.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.142 --rc genhtml_branch_coverage=1 00:11:19.142 --rc genhtml_function_coverage=1 00:11:19.142 --rc genhtml_legend=1 00:11:19.142 --rc geninfo_all_blocks=1 00:11:19.142 --rc geninfo_unexecuted_blocks=1 00:11:19.142 00:11:19.142 ' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:19.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:19.142 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.143 11:29:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.043 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.044 11:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:21.044 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:21.044 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:21.044 Found net devices under 0000:09:00.0: cvl_0_0 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:21.044 Found net devices under 0000:09:00.1: cvl_0_1 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:11:21.044 00:11:21.044 --- 10.0.0.2 ping statistics --- 00:11:21.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.044 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:11:21.044 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:11:21.044 00:11:21.045 --- 10.0.0.1 ping statistics --- 00:11:21.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.045 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.045 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2888877 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2888877 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2888877 ']' 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:21.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.303 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.303 [2024-11-15 11:30:01.540027] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:11:21.303 [2024-11-15 11:30:01.540123] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.303 [2024-11-15 11:30:01.623866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.303 [2024-11-15 11:30:01.684646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.303 [2024-11-15 11:30:01.684720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.303 [2024-11-15 11:30:01.684734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.303 [2024-11-15 11:30:01.684745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.303 [2024-11-15 11:30:01.684754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.303 [2024-11-15 11:30:01.686270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.303 [2024-11-15 11:30:01.686330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.303 [2024-11-15 11:30:01.686335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.561 [2024-11-15 11:30:01.831465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
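These rpc_cmd calls do the target-side bring-up for the stress run: nvmf_create_transport sets up the TCP transport (-t tcp -o -u 8192) and nvmf_create_subsystem creates nqn.2016-06.io.spdk:cnode1 with any host allowed (-a), serial SPDK00000000000001 and a 10-namespace cap (-m 10); the listener on 10.0.0.2:4420 and the NULL1 bdev follow just below. rpc_cmd is the test framework's wrapper, but the same calls issued directly with rpc.py against the socket named in the waitforlisten message (/var/tmp/spdk.sock, a Unix socket, so it stays reachable even though the target runs inside the cvl_0_0_ns_spdk namespace) would look roughly like this sketch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock   # RPC socket from the waitforlisten message above

# Transport and subsystem, exactly as issued by connect_stress.sh@15 and @16.
"$rpc" -s "$sock" nvmf_create_transport -t tcp -o -u 8192
"$rpc" -s "$sock" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
# Listener and backing null bdev, as added by the next rpc_cmd calls in the log.
"$rpc" -s "$sock" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
"$rpc" -s "$sock" bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512 B blocks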
00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.561 [2024-11-15 11:30:01.848682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.561 NULL1 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2889044 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 
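The block above is the whole target-side bring-up for the connect_stress case: rpc_cmd creates the TCP transport, the subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, up to 10 namespaces), a 10.0.0.2:4420 listener, and a 1000 MiB null bdev, then the connect_stress app is launched against that subsystem for 10 seconds. A minimal sketch of the same sequence issued directly with scripts/rpc.py (socket path and working directory are assumptions; rpc_cmd in the test is a thin wrapper around calls like these):

# nvmf_tgt is already running and serving JSON-RPC on /var/tmp/spdk.sock
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options as used by the test
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
# initiator side: churn connects/disconnects against the subsystem for 10 seconds
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!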
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.561 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.562 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.562 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.562 11:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:21.562 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.562 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.562 11:30:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.819 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.819 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:21.819 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.819 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.819 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.383 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.383 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:22.383 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.383 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.383 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.641 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.641 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:22.641 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.641 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.641 11:30:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.898 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.899 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:22.899 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.899 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.899 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.156 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.156 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:23.156 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.156 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.156 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.414 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.414 11:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:23.414 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.414 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.414 11:30:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.980 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.980 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:23.980 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.980 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.980 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.238 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.238 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:24.238 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.238 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.238 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.507 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.507 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:24.507 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.507 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.507 11:30:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.769 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.769 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:24.769 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.769 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.769 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.026 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.026 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:25.026 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.026 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.027 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.591 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.591 11:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:25.591 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.591 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.591 11:30:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.849 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.849 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:25.849 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.849 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.849 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.106 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:26.106 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.107 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.107 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.364 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.364 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:26.364 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.364 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.364 11:30:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.621 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.621 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:26.621 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.621 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.621 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.187 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.187 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:27.187 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.187 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.187 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.445 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.445 11:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:27.445 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.445 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.445 11:30:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.703 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.703 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:27.703 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.703 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.703 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.960 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.960 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:27.960 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.960 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.960 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.525 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.525 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:28.525 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.525 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.525 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.783 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.783 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:28.783 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.783 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.783 11:30:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.041 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.041 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:29.041 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.041 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.041 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.298 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.298 11:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:29.298 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.298 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.298 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.556 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.556 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:29.556 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.556 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.556 11:30:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.120 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.120 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:30.120 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.120 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.120 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.378 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.378 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:30.378 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.378 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.378 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:30.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.635 11:30:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.893 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.893 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:30.893 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.894 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.894 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.151 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.151 11:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:31.151 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.151 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.151 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.717 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.717 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:31.717 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.717 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.717 11:30:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.717 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2889044 00:11:31.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2889044) - No such process 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2889044 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.975 rmmod nvme_tcp 00:11:31.975 rmmod nvme_fabrics 00:11:31.975 rmmod nvme_keyring 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2888877 ']' 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2888877 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2888877 ']' 00:11:31.975 11:30:12 
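Every repeated stanza above is one pass of the same check: connect_stress.sh verifies with kill -0 that the stress process is still running, then pushes another batch of RPCs at the target while the initiator keeps connecting and disconnecting; once the 10-second run expires, kill -0 reports "No such process", the script waits on the PID and removes its rpc.txt scratch file. A simplified sketch of that monitoring pattern (the exact RPC batch written to rpc.txt is not visible in this excerpt, so a harmless query stands in for it):

# PERF_PID = the connect_stress process launched earlier
while kill -0 "$PERF_PID" 2>/dev/null; do
    # exercise the RPC path while connects/disconnects are in flight
    ./scripts/rpc.py nvmf_get_subsystems > /dev/null
done
wait "$PERF_PID"                 # reap the stress process and pick up its exit code
rm -f rpc.txt                    # scratch file used to batch the RPC calls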
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2888877 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2888877 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2888877' 00:11:31.975 killing process with pid 2888877 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2888877 00:11:31.975 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2888877 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.233 11:30:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.135 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:34.135 00:11:34.135 real 0m15.494s 00:11:34.135 user 0m38.668s 00:11:34.135 sys 0m5.962s 00:11:34.135 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.135 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.135 ************************************ 00:11:34.135 END TEST nvmf_connect_stress 00:11:34.135 ************************************ 00:11:34.395 11:30:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:34.395 11:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.395 
11:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.395 11:30:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.395 ************************************ 00:11:34.395 START TEST nvmf_fused_ordering 00:11:34.395 ************************************ 00:11:34.395 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:34.396 * Looking for test storage... 00:11:34.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.396 --rc genhtml_branch_coverage=1 00:11:34.396 --rc genhtml_function_coverage=1 00:11:34.396 --rc genhtml_legend=1 00:11:34.396 --rc geninfo_all_blocks=1 00:11:34.396 --rc geninfo_unexecuted_blocks=1 00:11:34.396 00:11:34.396 ' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.396 --rc genhtml_branch_coverage=1 00:11:34.396 --rc genhtml_function_coverage=1 00:11:34.396 --rc genhtml_legend=1 00:11:34.396 --rc geninfo_all_blocks=1 00:11:34.396 --rc geninfo_unexecuted_blocks=1 00:11:34.396 00:11:34.396 ' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.396 --rc genhtml_branch_coverage=1 00:11:34.396 --rc genhtml_function_coverage=1 00:11:34.396 --rc genhtml_legend=1 00:11:34.396 --rc geninfo_all_blocks=1 00:11:34.396 --rc geninfo_unexecuted_blocks=1 00:11:34.396 00:11:34.396 ' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.396 --rc genhtml_branch_coverage=1 00:11:34.396 --rc genhtml_function_coverage=1 00:11:34.396 --rc genhtml_legend=1 00:11:34.396 --rc geninfo_all_blocks=1 00:11:34.396 --rc geninfo_unexecuted_blocks=1 00:11:34.396 00:11:34.396 ' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:34.396 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:34.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.397 11:30:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.932 11:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:36.932 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:36.932 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:36.932 Found net devices under 0000:09:00.0: cvl_0_0 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:36.932 Found net devices under 0000:09:00.1: cvl_0_1 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
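Before any network namespaces are configured, nvmf/common.sh walks the PCI bus for supported NICs: both functions of the Intel E810 (0x8086:0x159b) at 0000:09:00.0 and 0000:09:00.1 are matched, and the kernel net devices bound under each function (cvl_0_0 and cvl_0_1) are collected into net_devs. The discovery reduces to a sysfs walk like the sketch below (the PCI addresses are hard-coded here only to mirror this machine):

# collect the net devices registered under each whitelisted PCI function (sysfs layout used by common.sh)
for pci in 0000:09:00.0 0000:09:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue           # skip functions whose driver exposes no net device
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done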
-- # net_devs+=("${pci_net_devs[@]}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.932 11:30:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.932 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.932 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.932 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:11:36.933 00:11:36.933 --- 10.0.0.2 ping statistics --- 00:11:36.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.933 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:36.933 00:11:36.933 --- 10.0.0.1 ping statistics --- 00:11:36.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.933 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2892714 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2892714 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2892714 ']' 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:36.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.933 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:36.933 [2024-11-15 11:30:17.157435] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:11:36.933 [2024-11-15 11:30:17.157527] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.933 [2024-11-15 11:30:17.228457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.933 [2024-11-15 11:30:17.281869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.933 [2024-11-15 11:30:17.281925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.933 [2024-11-15 11:30:17.281953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.933 [2024-11-15 11:30:17.281964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.933 [2024-11-15 11:30:17.281974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:36.933 [2024-11-15 11:30:17.282526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 [2024-11-15 11:30:17.418139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 [2024-11-15 11:30:17.434326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 NULL1 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.192 11:30:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:37.192 [2024-11-15 11:30:17.478426] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
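[editor's note] The trace above compresses the whole fused-ordering target bring-up into a handful of rpc_cmd calls (target/fused_ordering.sh lines 15-22). For readers who want to reproduce it outside the CI harness, the following is a minimal sketch of the equivalent command sequence, using only paths and arguments that appear in this log; the rpc.py path and the backgrounding of nvmf_tgt are assumptions about how the helper functions wire things together, not the test script itself.

# Start the target inside the namespace created earlier (core mask 0x2, as in the trace)
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# RPC sequence mirrored from the rpc_cmd calls above (rpc.py path assumed from this workspace)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Initiator-side tool, exactly as invoked by the test; it issues the fused compare-and-write
# sequence whose per-command progress is enumerated below
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'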
00:11:37.192 [2024-11-15 11:30:17.478461] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892801 ] 00:11:37.450 Attached to nqn.2016-06.io.spdk:cnode1 00:11:37.450 Namespace ID: 1 size: 1GB 00:11:37.450 fused_ordering(0) 00:11:37.450 fused_ordering(1) 00:11:37.450 fused_ordering(2) 00:11:37.450 fused_ordering(3) 00:11:37.450 fused_ordering(4) 00:11:37.450 fused_ordering(5) 00:11:37.450 fused_ordering(6) 00:11:37.450 fused_ordering(7) 00:11:37.450 fused_ordering(8) 00:11:37.450 fused_ordering(9) 00:11:37.450 fused_ordering(10) 00:11:37.450 fused_ordering(11) 00:11:37.450 fused_ordering(12) 00:11:37.450 fused_ordering(13) 00:11:37.450 fused_ordering(14) 00:11:37.450 fused_ordering(15) 00:11:37.450 fused_ordering(16) 00:11:37.450 fused_ordering(17) 00:11:37.450 fused_ordering(18) 00:11:37.450 fused_ordering(19) 00:11:37.450 fused_ordering(20) 00:11:37.450 fused_ordering(21) 00:11:37.450 fused_ordering(22) 00:11:37.450 fused_ordering(23) 00:11:37.450 fused_ordering(24) 00:11:37.450 fused_ordering(25) 00:11:37.450 fused_ordering(26) 00:11:37.450 fused_ordering(27) 00:11:37.450 fused_ordering(28) 00:11:37.450 fused_ordering(29) 00:11:37.450 fused_ordering(30) 00:11:37.450 fused_ordering(31) 00:11:37.450 fused_ordering(32) 00:11:37.450 fused_ordering(33) 00:11:37.450 fused_ordering(34) 00:11:37.450 fused_ordering(35) 00:11:37.450 fused_ordering(36) 00:11:37.450 fused_ordering(37) 00:11:37.450 fused_ordering(38) 00:11:37.450 fused_ordering(39) 00:11:37.450 fused_ordering(40) 00:11:37.450 fused_ordering(41) 00:11:37.450 fused_ordering(42) 00:11:37.450 fused_ordering(43) 00:11:37.450 fused_ordering(44) 00:11:37.450 fused_ordering(45) 00:11:37.450 fused_ordering(46) 00:11:37.450 fused_ordering(47) 00:11:37.450 fused_ordering(48) 00:11:37.450 fused_ordering(49) 00:11:37.450 fused_ordering(50) 00:11:37.450 fused_ordering(51) 00:11:37.450 fused_ordering(52) 00:11:37.450 fused_ordering(53) 00:11:37.450 fused_ordering(54) 00:11:37.450 fused_ordering(55) 00:11:37.450 fused_ordering(56) 00:11:37.450 fused_ordering(57) 00:11:37.450 fused_ordering(58) 00:11:37.450 fused_ordering(59) 00:11:37.450 fused_ordering(60) 00:11:37.450 fused_ordering(61) 00:11:37.450 fused_ordering(62) 00:11:37.450 fused_ordering(63) 00:11:37.450 fused_ordering(64) 00:11:37.450 fused_ordering(65) 00:11:37.450 fused_ordering(66) 00:11:37.450 fused_ordering(67) 00:11:37.450 fused_ordering(68) 00:11:37.450 fused_ordering(69) 00:11:37.450 fused_ordering(70) 00:11:37.450 fused_ordering(71) 00:11:37.450 fused_ordering(72) 00:11:37.450 fused_ordering(73) 00:11:37.450 fused_ordering(74) 00:11:37.450 fused_ordering(75) 00:11:37.450 fused_ordering(76) 00:11:37.450 fused_ordering(77) 00:11:37.450 fused_ordering(78) 00:11:37.450 fused_ordering(79) 00:11:37.451 fused_ordering(80) 00:11:37.451 fused_ordering(81) 00:11:37.451 fused_ordering(82) 00:11:37.451 fused_ordering(83) 00:11:37.451 fused_ordering(84) 00:11:37.451 fused_ordering(85) 00:11:37.451 fused_ordering(86) 00:11:37.451 fused_ordering(87) 00:11:37.451 fused_ordering(88) 00:11:37.451 fused_ordering(89) 00:11:37.451 fused_ordering(90) 00:11:37.451 fused_ordering(91) 00:11:37.451 fused_ordering(92) 00:11:37.451 fused_ordering(93) 00:11:37.451 fused_ordering(94) 00:11:37.451 fused_ordering(95) 00:11:37.451 fused_ordering(96) 00:11:37.451 fused_ordering(97) 00:11:37.451 fused_ordering(98) 
00:11:37.451 fused_ordering(99) 00:11:37.451 fused_ordering(100) 00:11:37.451 fused_ordering(101) 00:11:37.451 fused_ordering(102) 00:11:37.451 fused_ordering(103) 00:11:37.451 fused_ordering(104) 00:11:37.451 fused_ordering(105) 00:11:37.451 fused_ordering(106) 00:11:37.451 fused_ordering(107) 00:11:37.451 fused_ordering(108) 00:11:37.451 fused_ordering(109) 00:11:37.451 fused_ordering(110) 00:11:37.451 fused_ordering(111) 00:11:37.451 fused_ordering(112) 00:11:37.451 fused_ordering(113) 00:11:37.451 fused_ordering(114) 00:11:37.451 fused_ordering(115) 00:11:37.451 fused_ordering(116) 00:11:37.451 fused_ordering(117) 00:11:37.451 fused_ordering(118) 00:11:37.451 fused_ordering(119) 00:11:37.451 fused_ordering(120) 00:11:37.451 fused_ordering(121) 00:11:37.451 fused_ordering(122) 00:11:37.451 fused_ordering(123) 00:11:37.451 fused_ordering(124) 00:11:37.451 fused_ordering(125) 00:11:37.451 fused_ordering(126) 00:11:37.451 fused_ordering(127) 00:11:37.451 fused_ordering(128) 00:11:37.451 fused_ordering(129) 00:11:37.451 fused_ordering(130) 00:11:37.451 fused_ordering(131) 00:11:37.451 fused_ordering(132) 00:11:37.451 fused_ordering(133) 00:11:37.451 fused_ordering(134) 00:11:37.451 fused_ordering(135) 00:11:37.451 fused_ordering(136) 00:11:37.451 fused_ordering(137) 00:11:37.451 fused_ordering(138) 00:11:37.451 fused_ordering(139) 00:11:37.451 fused_ordering(140) 00:11:37.451 fused_ordering(141) 00:11:37.451 fused_ordering(142) 00:11:37.451 fused_ordering(143) 00:11:37.451 fused_ordering(144) 00:11:37.451 fused_ordering(145) 00:11:37.451 fused_ordering(146) 00:11:37.451 fused_ordering(147) 00:11:37.451 fused_ordering(148) 00:11:37.451 fused_ordering(149) 00:11:37.451 fused_ordering(150) 00:11:37.451 fused_ordering(151) 00:11:37.451 fused_ordering(152) 00:11:37.451 fused_ordering(153) 00:11:37.451 fused_ordering(154) 00:11:37.451 fused_ordering(155) 00:11:37.451 fused_ordering(156) 00:11:37.451 fused_ordering(157) 00:11:37.451 fused_ordering(158) 00:11:37.451 fused_ordering(159) 00:11:37.451 fused_ordering(160) 00:11:37.451 fused_ordering(161) 00:11:37.451 fused_ordering(162) 00:11:37.451 fused_ordering(163) 00:11:37.451 fused_ordering(164) 00:11:37.451 fused_ordering(165) 00:11:37.451 fused_ordering(166) 00:11:37.451 fused_ordering(167) 00:11:37.451 fused_ordering(168) 00:11:37.451 fused_ordering(169) 00:11:37.451 fused_ordering(170) 00:11:37.451 fused_ordering(171) 00:11:37.451 fused_ordering(172) 00:11:37.451 fused_ordering(173) 00:11:37.451 fused_ordering(174) 00:11:37.451 fused_ordering(175) 00:11:37.451 fused_ordering(176) 00:11:37.451 fused_ordering(177) 00:11:37.451 fused_ordering(178) 00:11:37.451 fused_ordering(179) 00:11:37.451 fused_ordering(180) 00:11:37.451 fused_ordering(181) 00:11:37.451 fused_ordering(182) 00:11:37.451 fused_ordering(183) 00:11:37.451 fused_ordering(184) 00:11:37.451 fused_ordering(185) 00:11:37.451 fused_ordering(186) 00:11:37.451 fused_ordering(187) 00:11:37.451 fused_ordering(188) 00:11:37.451 fused_ordering(189) 00:11:37.451 fused_ordering(190) 00:11:37.451 fused_ordering(191) 00:11:37.451 fused_ordering(192) 00:11:37.451 fused_ordering(193) 00:11:37.451 fused_ordering(194) 00:11:37.451 fused_ordering(195) 00:11:37.451 fused_ordering(196) 00:11:37.451 fused_ordering(197) 00:11:37.451 fused_ordering(198) 00:11:37.451 fused_ordering(199) 00:11:37.451 fused_ordering(200) 00:11:37.451 fused_ordering(201) 00:11:37.451 fused_ordering(202) 00:11:37.451 fused_ordering(203) 00:11:37.451 fused_ordering(204) 00:11:37.451 fused_ordering(205) 00:11:38.017 
fused_ordering(206) 00:11:38.017 fused_ordering(207) 00:11:38.017 fused_ordering(208) 00:11:38.017 fused_ordering(209) 00:11:38.017 fused_ordering(210) 00:11:38.017 fused_ordering(211) 00:11:38.017 fused_ordering(212) 00:11:38.017 fused_ordering(213) 00:11:38.017 fused_ordering(214) 00:11:38.018 fused_ordering(215) 00:11:38.018 fused_ordering(216) 00:11:38.018 fused_ordering(217) 00:11:38.018 fused_ordering(218) 00:11:38.018 fused_ordering(219) 00:11:38.018 fused_ordering(220) 00:11:38.018 fused_ordering(221) 00:11:38.018 fused_ordering(222) 00:11:38.018 fused_ordering(223) 00:11:38.018 fused_ordering(224) 00:11:38.018 fused_ordering(225) 00:11:38.018 fused_ordering(226) 00:11:38.018 fused_ordering(227) 00:11:38.018 fused_ordering(228) 00:11:38.018 fused_ordering(229) 00:11:38.018 fused_ordering(230) 00:11:38.018 fused_ordering(231) 00:11:38.018 fused_ordering(232) 00:11:38.018 fused_ordering(233) 00:11:38.018 fused_ordering(234) 00:11:38.018 fused_ordering(235) 00:11:38.018 fused_ordering(236) 00:11:38.018 fused_ordering(237) 00:11:38.018 fused_ordering(238) 00:11:38.018 fused_ordering(239) 00:11:38.018 fused_ordering(240) 00:11:38.018 fused_ordering(241) 00:11:38.018 fused_ordering(242) 00:11:38.018 fused_ordering(243) 00:11:38.018 fused_ordering(244) 00:11:38.018 fused_ordering(245) 00:11:38.018 fused_ordering(246) 00:11:38.018 fused_ordering(247) 00:11:38.018 fused_ordering(248) 00:11:38.018 fused_ordering(249) 00:11:38.018 fused_ordering(250) 00:11:38.018 fused_ordering(251) 00:11:38.018 fused_ordering(252) 00:11:38.018 fused_ordering(253) 00:11:38.018 fused_ordering(254) 00:11:38.018 fused_ordering(255) 00:11:38.018 fused_ordering(256) 00:11:38.018 fused_ordering(257) 00:11:38.018 fused_ordering(258) 00:11:38.018 fused_ordering(259) 00:11:38.018 fused_ordering(260) 00:11:38.018 fused_ordering(261) 00:11:38.018 fused_ordering(262) 00:11:38.018 fused_ordering(263) 00:11:38.018 fused_ordering(264) 00:11:38.018 fused_ordering(265) 00:11:38.018 fused_ordering(266) 00:11:38.018 fused_ordering(267) 00:11:38.018 fused_ordering(268) 00:11:38.018 fused_ordering(269) 00:11:38.018 fused_ordering(270) 00:11:38.018 fused_ordering(271) 00:11:38.018 fused_ordering(272) 00:11:38.018 fused_ordering(273) 00:11:38.018 fused_ordering(274) 00:11:38.018 fused_ordering(275) 00:11:38.018 fused_ordering(276) 00:11:38.018 fused_ordering(277) 00:11:38.018 fused_ordering(278) 00:11:38.018 fused_ordering(279) 00:11:38.018 fused_ordering(280) 00:11:38.018 fused_ordering(281) 00:11:38.018 fused_ordering(282) 00:11:38.018 fused_ordering(283) 00:11:38.018 fused_ordering(284) 00:11:38.018 fused_ordering(285) 00:11:38.018 fused_ordering(286) 00:11:38.018 fused_ordering(287) 00:11:38.018 fused_ordering(288) 00:11:38.018 fused_ordering(289) 00:11:38.018 fused_ordering(290) 00:11:38.018 fused_ordering(291) 00:11:38.018 fused_ordering(292) 00:11:38.018 fused_ordering(293) 00:11:38.018 fused_ordering(294) 00:11:38.018 fused_ordering(295) 00:11:38.018 fused_ordering(296) 00:11:38.018 fused_ordering(297) 00:11:38.018 fused_ordering(298) 00:11:38.018 fused_ordering(299) 00:11:38.018 fused_ordering(300) 00:11:38.018 fused_ordering(301) 00:11:38.018 fused_ordering(302) 00:11:38.018 fused_ordering(303) 00:11:38.018 fused_ordering(304) 00:11:38.018 fused_ordering(305) 00:11:38.018 fused_ordering(306) 00:11:38.018 fused_ordering(307) 00:11:38.018 fused_ordering(308) 00:11:38.018 fused_ordering(309) 00:11:38.018 fused_ordering(310) 00:11:38.018 fused_ordering(311) 00:11:38.018 fused_ordering(312) 00:11:38.018 fused_ordering(313) 
00:11:38.018 fused_ordering(314) 00:11:38.018 fused_ordering(315) 00:11:38.018 fused_ordering(316) 00:11:38.018 fused_ordering(317) 00:11:38.018 fused_ordering(318) 00:11:38.018 fused_ordering(319) 00:11:38.018 fused_ordering(320) 00:11:38.018 fused_ordering(321) 00:11:38.018 fused_ordering(322) 00:11:38.018 fused_ordering(323) 00:11:38.018 fused_ordering(324) 00:11:38.018 fused_ordering(325) 00:11:38.018 fused_ordering(326) 00:11:38.018 fused_ordering(327) 00:11:38.018 fused_ordering(328) 00:11:38.018 fused_ordering(329) 00:11:38.018 fused_ordering(330) 00:11:38.018 fused_ordering(331) 00:11:38.018 fused_ordering(332) 00:11:38.018 fused_ordering(333) 00:11:38.018 fused_ordering(334) 00:11:38.018 fused_ordering(335) 00:11:38.018 fused_ordering(336) 00:11:38.018 fused_ordering(337) 00:11:38.018 fused_ordering(338) 00:11:38.018 fused_ordering(339) 00:11:38.018 fused_ordering(340) 00:11:38.018 fused_ordering(341) 00:11:38.018 fused_ordering(342) 00:11:38.018 fused_ordering(343) 00:11:38.018 fused_ordering(344) 00:11:38.018 fused_ordering(345) 00:11:38.018 fused_ordering(346) 00:11:38.018 fused_ordering(347) 00:11:38.018 fused_ordering(348) 00:11:38.018 fused_ordering(349) 00:11:38.018 fused_ordering(350) 00:11:38.018 fused_ordering(351) 00:11:38.018 fused_ordering(352) 00:11:38.018 fused_ordering(353) 00:11:38.018 fused_ordering(354) 00:11:38.018 fused_ordering(355) 00:11:38.018 fused_ordering(356) 00:11:38.018 fused_ordering(357) 00:11:38.018 fused_ordering(358) 00:11:38.018 fused_ordering(359) 00:11:38.018 fused_ordering(360) 00:11:38.018 fused_ordering(361) 00:11:38.018 fused_ordering(362) 00:11:38.018 fused_ordering(363) 00:11:38.018 fused_ordering(364) 00:11:38.018 fused_ordering(365) 00:11:38.018 fused_ordering(366) 00:11:38.018 fused_ordering(367) 00:11:38.018 fused_ordering(368) 00:11:38.018 fused_ordering(369) 00:11:38.018 fused_ordering(370) 00:11:38.018 fused_ordering(371) 00:11:38.018 fused_ordering(372) 00:11:38.018 fused_ordering(373) 00:11:38.018 fused_ordering(374) 00:11:38.018 fused_ordering(375) 00:11:38.018 fused_ordering(376) 00:11:38.018 fused_ordering(377) 00:11:38.018 fused_ordering(378) 00:11:38.018 fused_ordering(379) 00:11:38.018 fused_ordering(380) 00:11:38.018 fused_ordering(381) 00:11:38.018 fused_ordering(382) 00:11:38.018 fused_ordering(383) 00:11:38.018 fused_ordering(384) 00:11:38.018 fused_ordering(385) 00:11:38.018 fused_ordering(386) 00:11:38.018 fused_ordering(387) 00:11:38.018 fused_ordering(388) 00:11:38.018 fused_ordering(389) 00:11:38.018 fused_ordering(390) 00:11:38.018 fused_ordering(391) 00:11:38.018 fused_ordering(392) 00:11:38.018 fused_ordering(393) 00:11:38.018 fused_ordering(394) 00:11:38.018 fused_ordering(395) 00:11:38.018 fused_ordering(396) 00:11:38.018 fused_ordering(397) 00:11:38.018 fused_ordering(398) 00:11:38.018 fused_ordering(399) 00:11:38.018 fused_ordering(400) 00:11:38.018 fused_ordering(401) 00:11:38.018 fused_ordering(402) 00:11:38.018 fused_ordering(403) 00:11:38.018 fused_ordering(404) 00:11:38.018 fused_ordering(405) 00:11:38.018 fused_ordering(406) 00:11:38.018 fused_ordering(407) 00:11:38.018 fused_ordering(408) 00:11:38.018 fused_ordering(409) 00:11:38.018 fused_ordering(410) 00:11:38.276 fused_ordering(411) 00:11:38.276 fused_ordering(412) 00:11:38.277 fused_ordering(413) 00:11:38.277 fused_ordering(414) 00:11:38.277 fused_ordering(415) 00:11:38.277 fused_ordering(416) 00:11:38.277 fused_ordering(417) 00:11:38.277 fused_ordering(418) 00:11:38.277 fused_ordering(419) 00:11:38.277 fused_ordering(420) 00:11:38.277 
fused_ordering(421) 00:11:38.277 fused_ordering(422) 00:11:38.277 fused_ordering(423) 00:11:38.277 fused_ordering(424) 00:11:38.277 fused_ordering(425) 00:11:38.277 fused_ordering(426) 00:11:38.277 fused_ordering(427) 00:11:38.277 fused_ordering(428) 00:11:38.277 fused_ordering(429) 00:11:38.277 fused_ordering(430) 00:11:38.277 fused_ordering(431) 00:11:38.277 fused_ordering(432) 00:11:38.277 fused_ordering(433) 00:11:38.277 fused_ordering(434) 00:11:38.277 fused_ordering(435) 00:11:38.277 fused_ordering(436) 00:11:38.277 fused_ordering(437) 00:11:38.277 fused_ordering(438) 00:11:38.277 fused_ordering(439) 00:11:38.277 fused_ordering(440) 00:11:38.277 fused_ordering(441) 00:11:38.277 fused_ordering(442) 00:11:38.277 fused_ordering(443) 00:11:38.277 fused_ordering(444) 00:11:38.277 fused_ordering(445) 00:11:38.277 fused_ordering(446) 00:11:38.277 fused_ordering(447) 00:11:38.277 fused_ordering(448) 00:11:38.277 fused_ordering(449) 00:11:38.277 fused_ordering(450) 00:11:38.277 fused_ordering(451) 00:11:38.277 fused_ordering(452) 00:11:38.277 fused_ordering(453) 00:11:38.277 fused_ordering(454) 00:11:38.277 fused_ordering(455) 00:11:38.277 fused_ordering(456) 00:11:38.277 fused_ordering(457) 00:11:38.277 fused_ordering(458) 00:11:38.277 fused_ordering(459) 00:11:38.277 fused_ordering(460) 00:11:38.277 fused_ordering(461) 00:11:38.277 fused_ordering(462) 00:11:38.277 fused_ordering(463) 00:11:38.277 fused_ordering(464) 00:11:38.277 fused_ordering(465) 00:11:38.277 fused_ordering(466) 00:11:38.277 fused_ordering(467) 00:11:38.277 fused_ordering(468) 00:11:38.277 fused_ordering(469) 00:11:38.277 fused_ordering(470) 00:11:38.277 fused_ordering(471) 00:11:38.277 fused_ordering(472) 00:11:38.277 fused_ordering(473) 00:11:38.277 fused_ordering(474) 00:11:38.277 fused_ordering(475) 00:11:38.277 fused_ordering(476) 00:11:38.277 fused_ordering(477) 00:11:38.277 fused_ordering(478) 00:11:38.277 fused_ordering(479) 00:11:38.277 fused_ordering(480) 00:11:38.277 fused_ordering(481) 00:11:38.277 fused_ordering(482) 00:11:38.277 fused_ordering(483) 00:11:38.277 fused_ordering(484) 00:11:38.277 fused_ordering(485) 00:11:38.277 fused_ordering(486) 00:11:38.277 fused_ordering(487) 00:11:38.277 fused_ordering(488) 00:11:38.277 fused_ordering(489) 00:11:38.277 fused_ordering(490) 00:11:38.277 fused_ordering(491) 00:11:38.277 fused_ordering(492) 00:11:38.277 fused_ordering(493) 00:11:38.277 fused_ordering(494) 00:11:38.277 fused_ordering(495) 00:11:38.277 fused_ordering(496) 00:11:38.277 fused_ordering(497) 00:11:38.277 fused_ordering(498) 00:11:38.277 fused_ordering(499) 00:11:38.277 fused_ordering(500) 00:11:38.277 fused_ordering(501) 00:11:38.277 fused_ordering(502) 00:11:38.277 fused_ordering(503) 00:11:38.277 fused_ordering(504) 00:11:38.277 fused_ordering(505) 00:11:38.277 fused_ordering(506) 00:11:38.277 fused_ordering(507) 00:11:38.277 fused_ordering(508) 00:11:38.277 fused_ordering(509) 00:11:38.277 fused_ordering(510) 00:11:38.277 fused_ordering(511) 00:11:38.277 fused_ordering(512) 00:11:38.277 fused_ordering(513) 00:11:38.277 fused_ordering(514) 00:11:38.277 fused_ordering(515) 00:11:38.277 fused_ordering(516) 00:11:38.277 fused_ordering(517) 00:11:38.277 fused_ordering(518) 00:11:38.277 fused_ordering(519) 00:11:38.277 fused_ordering(520) 00:11:38.277 fused_ordering(521) 00:11:38.277 fused_ordering(522) 00:11:38.277 fused_ordering(523) 00:11:38.277 fused_ordering(524) 00:11:38.277 fused_ordering(525) 00:11:38.277 fused_ordering(526) 00:11:38.277 fused_ordering(527) 00:11:38.277 fused_ordering(528) 
00:11:38.277 fused_ordering(529) 00:11:38.277 fused_ordering(530) 00:11:38.277 fused_ordering(531) 00:11:38.277 fused_ordering(532) 00:11:38.277 fused_ordering(533) 00:11:38.277 fused_ordering(534) 00:11:38.277 fused_ordering(535) 00:11:38.277 fused_ordering(536) 00:11:38.277 fused_ordering(537) 00:11:38.277 fused_ordering(538) 00:11:38.277 fused_ordering(539) 00:11:38.277 fused_ordering(540) 00:11:38.277 fused_ordering(541) 00:11:38.277 fused_ordering(542) 00:11:38.277 fused_ordering(543) 00:11:38.277 fused_ordering(544) 00:11:38.277 fused_ordering(545) 00:11:38.277 fused_ordering(546) 00:11:38.277 fused_ordering(547) 00:11:38.277 fused_ordering(548) 00:11:38.277 fused_ordering(549) 00:11:38.277 fused_ordering(550) 00:11:38.277 fused_ordering(551) 00:11:38.277 fused_ordering(552) 00:11:38.277 fused_ordering(553) 00:11:38.277 fused_ordering(554) 00:11:38.277 fused_ordering(555) 00:11:38.277 fused_ordering(556) 00:11:38.277 fused_ordering(557) 00:11:38.277 fused_ordering(558) 00:11:38.277 fused_ordering(559) 00:11:38.277 fused_ordering(560) 00:11:38.277 fused_ordering(561) 00:11:38.277 fused_ordering(562) 00:11:38.277 fused_ordering(563) 00:11:38.277 fused_ordering(564) 00:11:38.277 fused_ordering(565) 00:11:38.277 fused_ordering(566) 00:11:38.277 fused_ordering(567) 00:11:38.277 fused_ordering(568) 00:11:38.277 fused_ordering(569) 00:11:38.277 fused_ordering(570) 00:11:38.277 fused_ordering(571) 00:11:38.277 fused_ordering(572) 00:11:38.277 fused_ordering(573) 00:11:38.277 fused_ordering(574) 00:11:38.277 fused_ordering(575) 00:11:38.277 fused_ordering(576) 00:11:38.277 fused_ordering(577) 00:11:38.277 fused_ordering(578) 00:11:38.277 fused_ordering(579) 00:11:38.277 fused_ordering(580) 00:11:38.277 fused_ordering(581) 00:11:38.277 fused_ordering(582) 00:11:38.277 fused_ordering(583) 00:11:38.277 fused_ordering(584) 00:11:38.277 fused_ordering(585) 00:11:38.277 fused_ordering(586) 00:11:38.277 fused_ordering(587) 00:11:38.277 fused_ordering(588) 00:11:38.277 fused_ordering(589) 00:11:38.277 fused_ordering(590) 00:11:38.277 fused_ordering(591) 00:11:38.277 fused_ordering(592) 00:11:38.277 fused_ordering(593) 00:11:38.277 fused_ordering(594) 00:11:38.277 fused_ordering(595) 00:11:38.277 fused_ordering(596) 00:11:38.277 fused_ordering(597) 00:11:38.277 fused_ordering(598) 00:11:38.277 fused_ordering(599) 00:11:38.277 fused_ordering(600) 00:11:38.277 fused_ordering(601) 00:11:38.277 fused_ordering(602) 00:11:38.277 fused_ordering(603) 00:11:38.277 fused_ordering(604) 00:11:38.277 fused_ordering(605) 00:11:38.277 fused_ordering(606) 00:11:38.277 fused_ordering(607) 00:11:38.277 fused_ordering(608) 00:11:38.277 fused_ordering(609) 00:11:38.277 fused_ordering(610) 00:11:38.277 fused_ordering(611) 00:11:38.277 fused_ordering(612) 00:11:38.277 fused_ordering(613) 00:11:38.277 fused_ordering(614) 00:11:38.277 fused_ordering(615) 00:11:38.843 fused_ordering(616) 00:11:38.843 fused_ordering(617) 00:11:38.843 fused_ordering(618) 00:11:38.843 fused_ordering(619) 00:11:38.843 fused_ordering(620) 00:11:38.843 fused_ordering(621) 00:11:38.843 fused_ordering(622) 00:11:38.843 fused_ordering(623) 00:11:38.843 fused_ordering(624) 00:11:38.843 fused_ordering(625) 00:11:38.843 fused_ordering(626) 00:11:38.843 fused_ordering(627) 00:11:38.843 fused_ordering(628) 00:11:38.843 fused_ordering(629) 00:11:38.843 fused_ordering(630) 00:11:38.843 fused_ordering(631) 00:11:38.843 fused_ordering(632) 00:11:38.843 fused_ordering(633) 00:11:38.843 fused_ordering(634) 00:11:38.843 fused_ordering(635) 00:11:38.843 
fused_ordering(636) 00:11:38.843 fused_ordering(637) 00:11:38.843 fused_ordering(638) 00:11:38.843 fused_ordering(639) 00:11:38.843 fused_ordering(640) 00:11:38.843 fused_ordering(641) 00:11:38.843 fused_ordering(642) 00:11:38.843 fused_ordering(643) 00:11:38.843 fused_ordering(644) 00:11:38.843 fused_ordering(645) 00:11:38.843 fused_ordering(646) 00:11:38.843 fused_ordering(647) 00:11:38.843 fused_ordering(648) 00:11:38.843 fused_ordering(649) 00:11:38.843 fused_ordering(650) 00:11:38.843 fused_ordering(651) 00:11:38.843 fused_ordering(652) 00:11:38.843 fused_ordering(653) 00:11:38.843 fused_ordering(654) 00:11:38.843 fused_ordering(655) 00:11:38.843 fused_ordering(656) 00:11:38.843 fused_ordering(657) 00:11:38.843 fused_ordering(658) 00:11:38.843 fused_ordering(659) 00:11:38.843 fused_ordering(660) 00:11:38.843 fused_ordering(661) 00:11:38.843 fused_ordering(662) 00:11:38.843 fused_ordering(663) 00:11:38.843 fused_ordering(664) 00:11:38.843 fused_ordering(665) 00:11:38.843 fused_ordering(666) 00:11:38.843 fused_ordering(667) 00:11:38.843 fused_ordering(668) 00:11:38.843 fused_ordering(669) 00:11:38.843 fused_ordering(670) 00:11:38.843 fused_ordering(671) 00:11:38.843 fused_ordering(672) 00:11:38.843 fused_ordering(673) 00:11:38.843 fused_ordering(674) 00:11:38.843 fused_ordering(675) 00:11:38.843 fused_ordering(676) 00:11:38.843 fused_ordering(677) 00:11:38.843 fused_ordering(678) 00:11:38.843 fused_ordering(679) 00:11:38.843 fused_ordering(680) 00:11:38.843 fused_ordering(681) 00:11:38.843 fused_ordering(682) 00:11:38.843 fused_ordering(683) 00:11:38.843 fused_ordering(684) 00:11:38.843 fused_ordering(685) 00:11:38.843 fused_ordering(686) 00:11:38.843 fused_ordering(687) 00:11:38.844 fused_ordering(688) 00:11:38.844 fused_ordering(689) 00:11:38.844 fused_ordering(690) 00:11:38.844 fused_ordering(691) 00:11:38.844 fused_ordering(692) 00:11:38.844 fused_ordering(693) 00:11:38.844 fused_ordering(694) 00:11:38.844 fused_ordering(695) 00:11:38.844 fused_ordering(696) 00:11:38.844 fused_ordering(697) 00:11:38.844 fused_ordering(698) 00:11:38.844 fused_ordering(699) 00:11:38.844 fused_ordering(700) 00:11:38.844 fused_ordering(701) 00:11:38.844 fused_ordering(702) 00:11:38.844 fused_ordering(703) 00:11:38.844 fused_ordering(704) 00:11:38.844 fused_ordering(705) 00:11:38.844 fused_ordering(706) 00:11:38.844 fused_ordering(707) 00:11:38.844 fused_ordering(708) 00:11:38.844 fused_ordering(709) 00:11:38.844 fused_ordering(710) 00:11:38.844 fused_ordering(711) 00:11:38.844 fused_ordering(712) 00:11:38.844 fused_ordering(713) 00:11:38.844 fused_ordering(714) 00:11:38.844 fused_ordering(715) 00:11:38.844 fused_ordering(716) 00:11:38.844 fused_ordering(717) 00:11:38.844 fused_ordering(718) 00:11:38.844 fused_ordering(719) 00:11:38.844 fused_ordering(720) 00:11:38.844 fused_ordering(721) 00:11:38.844 fused_ordering(722) 00:11:38.844 fused_ordering(723) 00:11:38.844 fused_ordering(724) 00:11:38.844 fused_ordering(725) 00:11:38.844 fused_ordering(726) 00:11:38.844 fused_ordering(727) 00:11:38.844 fused_ordering(728) 00:11:38.844 fused_ordering(729) 00:11:38.844 fused_ordering(730) 00:11:38.844 fused_ordering(731) 00:11:38.844 fused_ordering(732) 00:11:38.844 fused_ordering(733) 00:11:38.844 fused_ordering(734) 00:11:38.844 fused_ordering(735) 00:11:38.844 fused_ordering(736) 00:11:38.844 fused_ordering(737) 00:11:38.844 fused_ordering(738) 00:11:38.844 fused_ordering(739) 00:11:38.844 fused_ordering(740) 00:11:38.844 fused_ordering(741) 00:11:38.844 fused_ordering(742) 00:11:38.844 fused_ordering(743) 
00:11:38.844 fused_ordering(744) 00:11:38.844 fused_ordering(745) 00:11:38.844 fused_ordering(746) 00:11:38.844 fused_ordering(747) 00:11:38.844 fused_ordering(748) 00:11:38.844 fused_ordering(749) 00:11:38.844 fused_ordering(750) 00:11:38.844 fused_ordering(751) 00:11:38.844 fused_ordering(752) 00:11:38.844 fused_ordering(753) 00:11:38.844 fused_ordering(754) 00:11:38.844 fused_ordering(755) 00:11:38.844 fused_ordering(756) 00:11:38.844 fused_ordering(757) 00:11:38.844 fused_ordering(758) 00:11:38.844 fused_ordering(759) 00:11:38.844 fused_ordering(760) 00:11:38.844 fused_ordering(761) 00:11:38.844 fused_ordering(762) 00:11:38.844 fused_ordering(763) 00:11:38.844 fused_ordering(764) 00:11:38.844 fused_ordering(765) 00:11:38.844 fused_ordering(766) 00:11:38.844 fused_ordering(767) 00:11:38.844 fused_ordering(768) 00:11:38.844 fused_ordering(769) 00:11:38.844 fused_ordering(770) 00:11:38.844 fused_ordering(771) 00:11:38.844 fused_ordering(772) 00:11:38.844 fused_ordering(773) 00:11:38.844 fused_ordering(774) 00:11:38.844 fused_ordering(775) 00:11:38.844 fused_ordering(776) 00:11:38.844 fused_ordering(777) 00:11:38.844 fused_ordering(778) 00:11:38.844 fused_ordering(779) 00:11:38.844 fused_ordering(780) 00:11:38.844 fused_ordering(781) 00:11:38.844 fused_ordering(782) 00:11:38.844 fused_ordering(783) 00:11:38.844 fused_ordering(784) 00:11:38.844 fused_ordering(785) 00:11:38.844 fused_ordering(786) 00:11:38.844 fused_ordering(787) 00:11:38.844 fused_ordering(788) 00:11:38.844 fused_ordering(789) 00:11:38.844 fused_ordering(790) 00:11:38.844 fused_ordering(791) 00:11:38.844 fused_ordering(792) 00:11:38.844 fused_ordering(793) 00:11:38.844 fused_ordering(794) 00:11:38.844 fused_ordering(795) 00:11:38.844 fused_ordering(796) 00:11:38.844 fused_ordering(797) 00:11:38.844 fused_ordering(798) 00:11:38.844 fused_ordering(799) 00:11:38.844 fused_ordering(800) 00:11:38.844 fused_ordering(801) 00:11:38.844 fused_ordering(802) 00:11:38.844 fused_ordering(803) 00:11:38.844 fused_ordering(804) 00:11:38.844 fused_ordering(805) 00:11:38.844 fused_ordering(806) 00:11:38.844 fused_ordering(807) 00:11:38.844 fused_ordering(808) 00:11:38.844 fused_ordering(809) 00:11:38.844 fused_ordering(810) 00:11:38.844 fused_ordering(811) 00:11:38.844 fused_ordering(812) 00:11:38.844 fused_ordering(813) 00:11:38.844 fused_ordering(814) 00:11:38.844 fused_ordering(815) 00:11:38.844 fused_ordering(816) 00:11:38.844 fused_ordering(817) 00:11:38.844 fused_ordering(818) 00:11:38.844 fused_ordering(819) 00:11:38.844 fused_ordering(820) 00:11:39.411 fused_ordering(821) 00:11:39.411 fused_ordering(822) 00:11:39.411 fused_ordering(823) 00:11:39.411 fused_ordering(824) 00:11:39.411 fused_ordering(825) 00:11:39.411 fused_ordering(826) 00:11:39.411 fused_ordering(827) 00:11:39.411 fused_ordering(828) 00:11:39.411 fused_ordering(829) 00:11:39.411 fused_ordering(830) 00:11:39.411 fused_ordering(831) 00:11:39.411 fused_ordering(832) 00:11:39.411 fused_ordering(833) 00:11:39.411 fused_ordering(834) 00:11:39.411 fused_ordering(835) 00:11:39.411 fused_ordering(836) 00:11:39.411 fused_ordering(837) 00:11:39.411 fused_ordering(838) 00:11:39.411 fused_ordering(839) 00:11:39.411 fused_ordering(840) 00:11:39.411 fused_ordering(841) 00:11:39.411 fused_ordering(842) 00:11:39.411 fused_ordering(843) 00:11:39.411 fused_ordering(844) 00:11:39.411 fused_ordering(845) 00:11:39.411 fused_ordering(846) 00:11:39.411 fused_ordering(847) 00:11:39.411 fused_ordering(848) 00:11:39.411 fused_ordering(849) 00:11:39.411 fused_ordering(850) 00:11:39.411 
fused_ordering(851) 00:11:39.411 fused_ordering(852) 00:11:39.411 fused_ordering(853) 00:11:39.411 fused_ordering(854) 00:11:39.411 fused_ordering(855) 00:11:39.411 fused_ordering(856) 00:11:39.411 fused_ordering(857) 00:11:39.411 fused_ordering(858) 00:11:39.411 fused_ordering(859) 00:11:39.411 fused_ordering(860) 00:11:39.411 fused_ordering(861) 00:11:39.411 fused_ordering(862) 00:11:39.411 fused_ordering(863) 00:11:39.411 fused_ordering(864) 00:11:39.411 fused_ordering(865) 00:11:39.411 fused_ordering(866) 00:11:39.411 fused_ordering(867) 00:11:39.411 fused_ordering(868) 00:11:39.411 fused_ordering(869) 00:11:39.411 fused_ordering(870) 00:11:39.411 fused_ordering(871) 00:11:39.411 fused_ordering(872) 00:11:39.411 fused_ordering(873) 00:11:39.411 fused_ordering(874) 00:11:39.411 fused_ordering(875) 00:11:39.411 fused_ordering(876) 00:11:39.411 fused_ordering(877) 00:11:39.411 fused_ordering(878) 00:11:39.411 fused_ordering(879) 00:11:39.411 fused_ordering(880) 00:11:39.411 fused_ordering(881) 00:11:39.411 fused_ordering(882) 00:11:39.411 fused_ordering(883) 00:11:39.411 fused_ordering(884) 00:11:39.411 fused_ordering(885) 00:11:39.411 fused_ordering(886) 00:11:39.411 fused_ordering(887) 00:11:39.411 fused_ordering(888) 00:11:39.411 fused_ordering(889) 00:11:39.411 fused_ordering(890) 00:11:39.411 fused_ordering(891) 00:11:39.411 fused_ordering(892) 00:11:39.411 fused_ordering(893) 00:11:39.411 fused_ordering(894) 00:11:39.411 fused_ordering(895) 00:11:39.411 fused_ordering(896) 00:11:39.411 fused_ordering(897) 00:11:39.411 fused_ordering(898) 00:11:39.411 fused_ordering(899) 00:11:39.411 fused_ordering(900) 00:11:39.411 fused_ordering(901) 00:11:39.411 fused_ordering(902) 00:11:39.411 fused_ordering(903) 00:11:39.411 fused_ordering(904) 00:11:39.411 fused_ordering(905) 00:11:39.411 fused_ordering(906) 00:11:39.411 fused_ordering(907) 00:11:39.411 fused_ordering(908) 00:11:39.411 fused_ordering(909) 00:11:39.411 fused_ordering(910) 00:11:39.411 fused_ordering(911) 00:11:39.411 fused_ordering(912) 00:11:39.411 fused_ordering(913) 00:11:39.411 fused_ordering(914) 00:11:39.411 fused_ordering(915) 00:11:39.411 fused_ordering(916) 00:11:39.411 fused_ordering(917) 00:11:39.411 fused_ordering(918) 00:11:39.411 fused_ordering(919) 00:11:39.411 fused_ordering(920) 00:11:39.411 fused_ordering(921) 00:11:39.411 fused_ordering(922) 00:11:39.411 fused_ordering(923) 00:11:39.411 fused_ordering(924) 00:11:39.411 fused_ordering(925) 00:11:39.411 fused_ordering(926) 00:11:39.411 fused_ordering(927) 00:11:39.411 fused_ordering(928) 00:11:39.411 fused_ordering(929) 00:11:39.411 fused_ordering(930) 00:11:39.411 fused_ordering(931) 00:11:39.411 fused_ordering(932) 00:11:39.411 fused_ordering(933) 00:11:39.411 fused_ordering(934) 00:11:39.411 fused_ordering(935) 00:11:39.411 fused_ordering(936) 00:11:39.411 fused_ordering(937) 00:11:39.411 fused_ordering(938) 00:11:39.411 fused_ordering(939) 00:11:39.411 fused_ordering(940) 00:11:39.411 fused_ordering(941) 00:11:39.411 fused_ordering(942) 00:11:39.411 fused_ordering(943) 00:11:39.411 fused_ordering(944) 00:11:39.411 fused_ordering(945) 00:11:39.411 fused_ordering(946) 00:11:39.411 fused_ordering(947) 00:11:39.411 fused_ordering(948) 00:11:39.411 fused_ordering(949) 00:11:39.411 fused_ordering(950) 00:11:39.411 fused_ordering(951) 00:11:39.411 fused_ordering(952) 00:11:39.411 fused_ordering(953) 00:11:39.411 fused_ordering(954) 00:11:39.411 fused_ordering(955) 00:11:39.411 fused_ordering(956) 00:11:39.411 fused_ordering(957) 00:11:39.411 fused_ordering(958) 
00:11:39.411 fused_ordering(959) 00:11:39.411 fused_ordering(960) 00:11:39.411 fused_ordering(961) 00:11:39.411 fused_ordering(962) 00:11:39.411 fused_ordering(963) 00:11:39.411 fused_ordering(964) 00:11:39.411 fused_ordering(965) 00:11:39.411 fused_ordering(966) 00:11:39.411 fused_ordering(967) 00:11:39.411 fused_ordering(968) 00:11:39.411 fused_ordering(969) 00:11:39.411 fused_ordering(970) 00:11:39.411 fused_ordering(971) 00:11:39.411 fused_ordering(972) 00:11:39.411 fused_ordering(973) 00:11:39.411 fused_ordering(974) 00:11:39.411 fused_ordering(975) 00:11:39.411 fused_ordering(976) 00:11:39.411 fused_ordering(977) 00:11:39.411 fused_ordering(978) 00:11:39.411 fused_ordering(979) 00:11:39.411 fused_ordering(980) 00:11:39.411 fused_ordering(981) 00:11:39.411 fused_ordering(982) 00:11:39.411 fused_ordering(983) 00:11:39.411 fused_ordering(984) 00:11:39.411 fused_ordering(985) 00:11:39.411 fused_ordering(986) 00:11:39.411 fused_ordering(987) 00:11:39.411 fused_ordering(988) 00:11:39.411 fused_ordering(989) 00:11:39.411 fused_ordering(990) 00:11:39.411 fused_ordering(991) 00:11:39.411 fused_ordering(992) 00:11:39.411 fused_ordering(993) 00:11:39.411 fused_ordering(994) 00:11:39.411 fused_ordering(995) 00:11:39.411 fused_ordering(996) 00:11:39.411 fused_ordering(997) 00:11:39.411 fused_ordering(998) 00:11:39.411 fused_ordering(999) 00:11:39.411 fused_ordering(1000) 00:11:39.411 fused_ordering(1001) 00:11:39.411 fused_ordering(1002) 00:11:39.411 fused_ordering(1003) 00:11:39.411 fused_ordering(1004) 00:11:39.411 fused_ordering(1005) 00:11:39.412 fused_ordering(1006) 00:11:39.412 fused_ordering(1007) 00:11:39.412 fused_ordering(1008) 00:11:39.412 fused_ordering(1009) 00:11:39.412 fused_ordering(1010) 00:11:39.412 fused_ordering(1011) 00:11:39.412 fused_ordering(1012) 00:11:39.412 fused_ordering(1013) 00:11:39.412 fused_ordering(1014) 00:11:39.412 fused_ordering(1015) 00:11:39.412 fused_ordering(1016) 00:11:39.412 fused_ordering(1017) 00:11:39.412 fused_ordering(1018) 00:11:39.412 fused_ordering(1019) 00:11:39.412 fused_ordering(1020) 00:11:39.412 fused_ordering(1021) 00:11:39.412 fused_ordering(1022) 00:11:39.412 fused_ordering(1023) 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.412 rmmod nvme_tcp 00:11:39.412 rmmod nvme_fabrics 00:11:39.412 rmmod nvme_keyring 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:39.412 11:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2892714 ']' 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2892714 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2892714 ']' 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2892714 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.412 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892714 00:11:39.670 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:39.670 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:39.670 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892714' 00:11:39.670 killing process with pid 2892714 00:11:39.670 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2892714 00:11:39.670 11:30:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2892714 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.670 11:30:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:42.211 00:11:42.211 real 0m7.543s 00:11:42.211 user 0m5.000s 00:11:42.211 sys 0m3.179s 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 ************************************ 00:11:42.211 END TEST nvmf_fused_ordering 00:11:42.211 
************************************ 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:42.211 ************************************ 00:11:42.211 START TEST nvmf_ns_masking 00:11:42.211 ************************************ 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:42.211 * Looking for test storage... 00:11:42.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:42.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.211 --rc genhtml_branch_coverage=1 00:11:42.211 --rc genhtml_function_coverage=1 00:11:42.211 --rc genhtml_legend=1 00:11:42.211 --rc geninfo_all_blocks=1 00:11:42.211 --rc geninfo_unexecuted_blocks=1 00:11:42.211 00:11:42.211 ' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:42.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.211 --rc genhtml_branch_coverage=1 00:11:42.211 --rc genhtml_function_coverage=1 00:11:42.211 --rc genhtml_legend=1 00:11:42.211 --rc geninfo_all_blocks=1 00:11:42.211 --rc geninfo_unexecuted_blocks=1 00:11:42.211 00:11:42.211 ' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:42.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.211 --rc genhtml_branch_coverage=1 00:11:42.211 --rc genhtml_function_coverage=1 00:11:42.211 --rc genhtml_legend=1 00:11:42.211 --rc geninfo_all_blocks=1 00:11:42.211 --rc geninfo_unexecuted_blocks=1 00:11:42.211 00:11:42.211 ' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:42.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.211 --rc genhtml_branch_coverage=1 00:11:42.211 --rc genhtml_function_coverage=1 00:11:42.211 --rc genhtml_legend=1 00:11:42.211 --rc geninfo_all_blocks=1 00:11:42.211 --rc geninfo_unexecuted_blocks=1 00:11:42.211 00:11:42.211 ' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.211 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:42.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=49cc06c8-c432-4a50-b166-8bd685a68bb8 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ec6000ed-072f-4004-b798-4505d17ee308 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cbeed657-a06f-4fee-93b2-1af5e7370d24 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:42.212 11:30:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.116 11:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.116 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:44.117 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:44.117 11:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:44.117 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:44.117 Found net devices under 0000:09:00.0: cvl_0_0 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
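The entries above come from gather_supported_nvmf_pci_devs in nvmf/common.sh: it builds per-vendor PCI ID lists (e810, x722, mlx), keeps only the E810 list because this run is configured with SPDK_TEST_NVMF_NICS=e810, and then resolves each matching PCI function (0x8086:0x159b, bound to the ice driver) to its kernel net device through sysfs. A minimal stand-alone sketch of the same lookup, assuming lspci is available; the device ID and the cvl_0_0/cvl_0_1 names are taken from the log, and the loop itself is illustrative rather than the helper's actual code:

    # List E810 (8086:159b) functions and the net devices the kernel created for them.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        # Each PCI function exposes its netdev(s) under /sys/bus/pci/devices/<BDF>/net/
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: $(basename "$dev")"
        done
    done
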
00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:44.117 Found net devices under 0000:09:00.1: cvl_0_1 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.117 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.375 11:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:44.375 00:11:44.375 --- 10.0.0.2 ping statistics --- 00:11:44.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.375 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:11:44.375 00:11:44.375 --- 10.0.0.1 ping statistics --- 00:11:44.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.375 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.375 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2895063 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2895063 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2895063 ']' 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.376 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.376 [2024-11-15 11:30:24.730006] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:11:44.376 [2024-11-15 11:30:24.730080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.376 [2024-11-15 11:30:24.799861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.634 [2024-11-15 11:30:24.858228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.634 [2024-11-15 11:30:24.858280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.634 [2024-11-15 11:30:24.858317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.634 [2024-11-15 11:30:24.858330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.634 [2024-11-15 11:30:24.858339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
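Taken together, the nvmftestinit steps above build a loopback NVMe/TCP topology out of the two back-to-back E810 ports: cvl_0_0 is moved into a dedicated network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP port 4420, and nvmf_tgt is then launched inside the namespace so the initiator-side tools on the host reach the target over the physical link. A condensed sketch of that plumbing, using the interface names and addresses from the log (the nvmf_tgt path is shortened to a relative one; on a host without cabled ports a veth pair could stand in for the two NICs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
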
00:11:44.634 [2024-11-15 11:30:24.858894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.634 11:30:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.892 [2024-11-15 11:30:25.298231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.158 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:45.158 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:45.158 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:45.476 Malloc1 00:11:45.476 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:45.756 Malloc2 00:11:45.756 11:30:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.014 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:46.271 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.528 [2024-11-15 11:30:26.777700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.528 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:46.528 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbeed657-a06f-4fee-93b2-1af5e7370d24 -a 10.0.0.2 -s 4420 -i 4 00:11:46.528 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.528 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.528 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.528 11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:46.528 
11:30:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.057 [ 0]:0x1 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.057 11:30:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3efe9220098f4376a866c85d18d90583 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3efe9220098f4376a866c85d18d90583 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.057 [ 0]:0x1 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3efe9220098f4376a866c85d18d90583 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3efe9220098f4376a866c85d18d90583 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.057 11:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:49.057 [ 1]:0x2 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:49.057 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.316 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:49.316 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.316 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:49.316 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.316 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.573 11:30:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:49.831 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:49.831 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbeed657-a06f-4fee-93b2-1af5e7370d24 -a 10.0.0.2 -s 4420 -i 4 00:11:50.089 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:50.089 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.089 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.089 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:50.089 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:50.089 11:30:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:51.987 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:51.987 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.988 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:52.246 [ 0]:0x2 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.246 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.503 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:52.504 [ 0]:0x1 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3efe9220098f4376a866c85d18d90583 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3efe9220098f4376a866c85d18d90583 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:52.504 [ 1]:0x2 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.504 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.761 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:52.761 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.761 11:30:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.019 11:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:53.019 [ 0]:0x2 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.019 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:53.276 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:53.276 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbeed657-a06f-4fee-93b2-1af5e7370d24 -a 10.0.0.2 -s 4420 -i 4 00:11:53.534 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:53.534 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.534 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.534 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:53.534 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:53.534 11:30:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.077 11:30:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.077 [ 0]:0x1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3efe9220098f4376a866c85d18d90583 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3efe9220098f4376a866c85d18d90583 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.077 [ 1]:0x2 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.077 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.078 [ 0]:0x2 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.078 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.335 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:56.335 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.335 11:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:56.336 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.594 [2024-11-15 11:30:36.848036] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:56.594 request: 00:11:56.594 { 00:11:56.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.594 "nsid": 2, 00:11:56.594 "host": "nqn.2016-06.io.spdk:host1", 00:11:56.594 "method": "nvmf_ns_remove_host", 00:11:56.594 "req_id": 1 00:11:56.594 } 00:11:56.594 Got JSON-RPC error response 00:11:56.594 response: 00:11:56.594 { 00:11:56.594 "code": -32602, 00:11:56.594 "message": "Invalid parameters" 00:11:56.594 } 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:56.594 11:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:56.594 [ 0]:0x2 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.594 11:30:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.594 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8f0f234e236f479ca72d2af86da4a30b 00:11:56.594 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8f0f234e236f479ca72d2af86da4a30b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.594 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:56.594 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2896697 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2896697 /var/tmp/host.sock 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2896697 ']' 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:56.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.853 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.853 [2024-11-15 11:30:37.187709] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:11:56.853 [2024-11-15 11:30:37.187791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896697 ] 00:11:56.853 [2024-11-15 11:30:37.253206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.110 [2024-11-15 11:30:37.312721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.368 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.368 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:57.368 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.626 11:30:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.884 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 49cc06c8-c432-4a50-b166-8bd685a68bb8 00:11:57.884 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:57.884 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 49CC06C8C4324A50B1668BD685A68BB8 -i 00:11:58.143 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ec6000ed-072f-4004-b798-4505d17ee308 00:11:58.143 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:58.143 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EC6000ED072F4004B7984505D17EE308 -i 00:11:58.401 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:58.658 11:30:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:58.916 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:58.916 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.173 nvme0n1 00:11:59.430 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.430 11:30:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.688 nvme1n2 00:11:59.946 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:59.946 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:59.946 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:59.946 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:59.946 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:00.204 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:00.204 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:00.204 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:00.204 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:00.462 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 49cc06c8-c432-4a50-b166-8bd685a68bb8 == \4\9\c\c\0\6\c\8\-\c\4\3\2\-\4\a\5\0\-\b\1\6\6\-\8\b\d\6\8\5\a\6\8\b\b\8 ]] 00:12:00.462 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:00.462 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:00.462 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:00.720 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
ec6000ed-072f-4004-b798-4505d17ee308 == \e\c\6\0\0\0\e\d\-\0\7\2\f\-\4\0\0\4\-\b\7\9\8\-\4\5\0\5\d\1\7\e\e\3\0\8 ]] 00:12:00.720 11:30:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.978 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 49cc06c8-c432-4a50-b166-8bd685a68bb8 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 49CC06C8C4324A50B1668BD685A68BB8 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 49CC06C8C4324A50B1668BD685A68BB8 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:01.235 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 49CC06C8C4324A50B1668BD685A68BB8 00:12:01.492 [2024-11-15 11:30:41.750485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:01.492 [2024-11-15 11:30:41.750524] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:01.492 [2024-11-15 11:30:41.750555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.492 request: 00:12:01.492 { 00:12:01.492 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.492 "namespace": { 00:12:01.492 "bdev_name": 
"invalid", 00:12:01.492 "nsid": 1, 00:12:01.492 "nguid": "49CC06C8C4324A50B1668BD685A68BB8", 00:12:01.492 "no_auto_visible": false 00:12:01.492 }, 00:12:01.492 "method": "nvmf_subsystem_add_ns", 00:12:01.492 "req_id": 1 00:12:01.492 } 00:12:01.492 Got JSON-RPC error response 00:12:01.492 response: 00:12:01.492 { 00:12:01.492 "code": -32602, 00:12:01.492 "message": "Invalid parameters" 00:12:01.492 } 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 49cc06c8-c432-4a50-b166-8bd685a68bb8 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.492 11:30:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 49CC06C8C4324A50B1668BD685A68BB8 -i 00:12:01.750 11:30:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:03.648 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:03.648 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:03.648 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:03.906 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:03.906 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2896697 00:12:03.906 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2896697 ']' 00:12:03.906 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2896697 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896697 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896697' 00:12:04.163 killing process with pid 2896697 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2896697 00:12:04.163 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2896697 00:12:04.420 11:30:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.984 rmmod nvme_tcp 00:12:04.984 rmmod nvme_fabrics 00:12:04.984 rmmod nvme_keyring 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2895063 ']' 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2895063 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2895063 ']' 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2895063 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2895063 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2895063' 00:12:04.984 killing process with pid 2895063 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2895063 00:12:04.984 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2895063 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.244 11:30:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.219 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.219 00:12:07.219 real 0m25.358s 00:12:07.219 user 0m36.904s 00:12:07.219 sys 0m4.695s 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.220 ************************************ 00:12:07.220 END TEST nvmf_ns_masking 00:12:07.220 ************************************ 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.220 ************************************ 00:12:07.220 START TEST nvmf_nvme_cli 00:12:07.220 ************************************ 00:12:07.220 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:07.478 * Looking for test storage... 
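For reference, the ns_is_visible helper that the ns_masking run above kept exercising reduces to a couple of nvme-cli calls. A minimal sketch, assuming nvme-cli and jq are available and the attached controller shows up as /dev/nvme0 (device name and NSID are taken from the log and are not guaranteed elsewhere):

#!/usr/bin/env bash
# Sketch of the visibility check from target/ns_masking.sh: a namespace
# counts as visible when `nvme list-ns` reports it and its NGUID is not
# all zeroes in the `nvme id-ns` JSON output.
set -euo pipefail

ctrl=/dev/nvme0   # assumed controller device, as seen in the log
nsid=0x2          # namespace under test

ns_is_visible() {
    local dev=$1 nsid=$2 nguid
    nvme list-ns "$dev" | grep -q "$nsid" || return 1
    nguid=$(nvme id-ns "$dev" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

if ns_is_visible "$ctrl" "$nsid"; then
    echo "namespace $nsid visible on $ctrl"
else
    echo "namespace $nsid masked on $ctrl"
fi

Whether the check passes is flipped from the target side with rpc.py nvmf_ns_add_host / nvmf_ns_remove_host, which is exactly what the test toggles before each assertion above.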
00:12:07.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:07.478 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.479 --rc genhtml_branch_coverage=1 00:12:07.479 --rc genhtml_function_coverage=1 00:12:07.479 --rc genhtml_legend=1 00:12:07.479 --rc geninfo_all_blocks=1 00:12:07.479 --rc geninfo_unexecuted_blocks=1 00:12:07.479 00:12:07.479 ' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.479 --rc genhtml_branch_coverage=1 00:12:07.479 --rc genhtml_function_coverage=1 00:12:07.479 --rc genhtml_legend=1 00:12:07.479 --rc geninfo_all_blocks=1 00:12:07.479 --rc geninfo_unexecuted_blocks=1 00:12:07.479 00:12:07.479 ' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.479 --rc genhtml_branch_coverage=1 00:12:07.479 --rc genhtml_function_coverage=1 00:12:07.479 --rc genhtml_legend=1 00:12:07.479 --rc geninfo_all_blocks=1 00:12:07.479 --rc geninfo_unexecuted_blocks=1 00:12:07.479 00:12:07.479 ' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.479 --rc genhtml_branch_coverage=1 00:12:07.479 --rc genhtml_function_coverage=1 00:12:07.479 --rc genhtml_legend=1 00:12:07.479 --rc geninfo_all_blocks=1 00:12:07.479 --rc geninfo_unexecuted_blocks=1 00:12:07.479 00:12:07.479 ' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.479 11:30:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.479 11:30:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:10.015 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:10.015 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.015 
11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:10.015 Found net devices under 0000:09:00.0: cvl_0_0 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:10.015 Found net devices under 0000:09:00.1: cvl_0_1 00:12:10.015 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.016 11:30:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:12:10.016 00:12:10.016 --- 10.0.0.2 ping statistics --- 00:12:10.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.016 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:10.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:12:10.016 00:12:10.016 --- 10.0.0.1 ping statistics --- 00:12:10.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.016 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2899610 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2899610 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2899610 ']' 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.016 [2024-11-15 11:30:50.157264] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
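The interface discovery and ping exchange above are the standard phy-mode plumbing from test/nvmf/common.sh: the target-side E810 port is moved into a private network namespace so that target (10.0.0.2) and initiator (10.0.0.1) talk over a real NIC pair on one machine. A condensed sketch of those steps, assuming root privileges and the cvl_0_0 / cvl_0_1 names seen in the log:

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps visible in the log:
# put the target interface in a private netns, address both ends,
# open the NVMe/TCP port in iptables, then verify with ping.
set -euo pipefail

TGT_IF=cvl_0_0          # target-side port (moved into the namespace)
INI_IF=cvl_0_1          # initiator-side port (stays in the root namespace)
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow the NVMe/TCP listener port through the host firewall
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator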
00:12:10.016 [2024-11-15 11:30:50.157363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.016 [2024-11-15 11:30:50.231761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.016 [2024-11-15 11:30:50.293750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.016 [2024-11-15 11:30:50.293802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.016 [2024-11-15 11:30:50.293825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.016 [2024-11-15 11:30:50.293850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.016 [2024-11-15 11:30:50.293859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.016 [2024-11-15 11:30:50.295391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.016 [2024-11-15 11:30:50.295453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.016 [2024-11-15 11:30:50.295503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.016 [2024-11-15 11:30:50.295507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.016 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 [2024-11-15 11:30:50.444125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 Malloc0 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 Malloc1 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 [2024-11-15 11:30:50.542085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.275 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:10.534 00:12:10.534 Discovery Log Number of Records 2, Generation counter 2 00:12:10.534 =====Discovery Log Entry 0====== 00:12:10.534 trtype: tcp 00:12:10.534 adrfam: ipv4 00:12:10.534 subtype: current discovery subsystem 00:12:10.534 treq: not required 00:12:10.534 portid: 0 00:12:10.534 trsvcid: 4420 00:12:10.534 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:10.534 traddr: 10.0.0.2 00:12:10.534 eflags: explicit discovery connections, duplicate discovery information 00:12:10.534 sectype: none 00:12:10.534 =====Discovery Log Entry 1====== 00:12:10.534 trtype: tcp 00:12:10.534 adrfam: ipv4 00:12:10.534 subtype: nvme subsystem 00:12:10.534 treq: not required 00:12:10.534 portid: 0 00:12:10.534 trsvcid: 4420 00:12:10.534 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:10.534 traddr: 10.0.0.2 00:12:10.534 eflags: none 00:12:10.534 sectype: none 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:10.534 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:10.535 11:30:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.100 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:11.100 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:11.100 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.100 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:11.100 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:11.100 11:30:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:13.626 11:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:13.626 /dev/nvme0n2 ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.626 11:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.626 rmmod nvme_tcp 00:12:13.626 rmmod nvme_fabrics 00:12:13.626 rmmod nvme_keyring 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2899610 ']' 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2899610 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2899610 ']' 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2899610 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2899610 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.626 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2899610' 00:12:13.626 killing process with pid 2899610 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2899610 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2899610 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.627 11:30:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.159 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.159 00:12:16.159 real 0m8.411s 00:12:16.159 user 0m15.224s 00:12:16.159 sys 0m2.369s 00:12:16.159 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.159 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:16.159 ************************************ 00:12:16.159 END TEST nvmf_nvme_cli 00:12:16.160 ************************************ 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.160 ************************************ 00:12:16.160 START TEST nvmf_vfio_user 00:12:16.160 ************************************ 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:16.160 * Looking for test storage... 00:12:16.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:16.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.160 --rc genhtml_branch_coverage=1 00:12:16.160 --rc genhtml_function_coverage=1 00:12:16.160 --rc genhtml_legend=1 00:12:16.160 --rc geninfo_all_blocks=1 00:12:16.160 --rc geninfo_unexecuted_blocks=1 00:12:16.160 00:12:16.160 ' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:16.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.160 --rc genhtml_branch_coverage=1 00:12:16.160 --rc genhtml_function_coverage=1 00:12:16.160 --rc genhtml_legend=1 00:12:16.160 --rc geninfo_all_blocks=1 00:12:16.160 --rc geninfo_unexecuted_blocks=1 00:12:16.160 00:12:16.160 ' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:16.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.160 --rc genhtml_branch_coverage=1 00:12:16.160 --rc genhtml_function_coverage=1 00:12:16.160 --rc genhtml_legend=1 00:12:16.160 --rc geninfo_all_blocks=1 00:12:16.160 --rc geninfo_unexecuted_blocks=1 00:12:16.160 00:12:16.160 ' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:16.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.160 --rc genhtml_branch_coverage=1 00:12:16.160 --rc genhtml_function_coverage=1 00:12:16.160 --rc genhtml_legend=1 00:12:16.160 --rc geninfo_all_blocks=1 00:12:16.160 --rc geninfo_unexecuted_blocks=1 00:12:16.160 00:12:16.160 ' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.160 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2900548 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2900548' 00:12:16.161 Process pid: 2900548 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2900548 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2900548 ']' 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:16.161 [2024-11-15 11:30:56.295579] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:12:16.161 [2024-11-15 11:30:56.295690] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.161 [2024-11-15 11:30:56.366160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.161 [2024-11-15 11:30:56.424395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.161 [2024-11-15 11:30:56.424445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:16.161 [2024-11-15 11:30:56.424468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.161 [2024-11-15 11:30:56.424479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.161 [2024-11-15 11:30:56.424489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.161 [2024-11-15 11:30:56.426021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.161 [2024-11-15 11:30:56.426085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.161 [2024-11-15 11:30:56.426150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.161 [2024-11-15 11:30:56.426153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:16.161 11:30:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:17.533 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:17.533 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:17.533 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:17.533 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:17.533 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:17.533 11:30:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:17.793 Malloc1 00:12:17.793 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:18.050 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:18.307 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:18.564 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:18.564 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:18.564 11:30:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:19.206 Malloc2 00:12:19.206 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:12:19.206 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:19.464 11:30:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:19.722 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:19.722 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:19.722 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:19.722 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:19.722 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:19.722 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:19.722 [2024-11-15 11:31:00.130917] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:12:19.722 [2024-11-15 11:31:00.130961] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900975 ] 00:12:19.984 [2024-11-15 11:31:00.185389] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:19.984 [2024-11-15 11:31:00.195880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:19.984 [2024-11-15 11:31:00.195912] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd6e05b5000 00:12:19.984 [2024-11-15 11:31:00.196873] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.197887] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.198874] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.199884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.200888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.201895] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.202900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.203905] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:19.984 [2024-11-15 11:31:00.204913] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:19.984 [2024-11-15 11:31:00.204933] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd6e05aa000 00:12:19.984 [2024-11-15 11:31:00.206079] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:19.984 [2024-11-15 11:31:00.220245] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:19.984 [2024-11-15 11:31:00.220298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:19.984 [2024-11-15 11:31:00.227028] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:19.984 [2024-11-15 11:31:00.227086] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:19.984 [2024-11-15 11:31:00.227175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:19.984 [2024-11-15 11:31:00.227217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:19.984 [2024-11-15 11:31:00.227229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:19.984 [2024-11-15 11:31:00.228025] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:19.984 [2024-11-15 11:31:00.228045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:19.984 [2024-11-15 11:31:00.228057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:19.984 [2024-11-15 11:31:00.229028] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:19.984 [2024-11-15 11:31:00.229048] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:19.984 [2024-11-15 11:31:00.229062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:19.984 [2024-11-15 11:31:00.230030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:19.984 [2024-11-15 11:31:00.230048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:19.984 [2024-11-15 11:31:00.231036] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:12:19.984 [2024-11-15 11:31:00.231054] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:19.984 [2024-11-15 11:31:00.231063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:19.984 [2024-11-15 11:31:00.231075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:19.984 [2024-11-15 11:31:00.231188] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:19.984 [2024-11-15 11:31:00.231198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:19.984 [2024-11-15 11:31:00.231206] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:19.984 [2024-11-15 11:31:00.232047] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:19.984 [2024-11-15 11:31:00.233062] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:19.984 [2024-11-15 11:31:00.234053] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:19.984 [2024-11-15 11:31:00.235048] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:19.984 [2024-11-15 11:31:00.235193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:19.984 [2024-11-15 11:31:00.236066] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:19.984 [2024-11-15 11:31:00.236085] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:19.984 [2024-11-15 11:31:00.236095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:19.984 [2024-11-15 11:31:00.236119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:19.984 [2024-11-15 11:31:00.236133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:19.984 [2024-11-15 11:31:00.236160] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:19.984 [2024-11-15 11:31:00.236170] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.984 [2024-11-15 11:31:00.236177] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.984 [2024-11-15 11:31:00.236195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:19.984 [2024-11-15 11:31:00.236271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:19.984 [2024-11-15 11:31:00.236287] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:19.984 [2024-11-15 11:31:00.236296] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:19.984 [2024-11-15 11:31:00.236311] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:19.984 [2024-11-15 11:31:00.236336] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:19.984 [2024-11-15 11:31:00.236349] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:19.984 [2024-11-15 11:31:00.236358] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:19.984 [2024-11-15 11:31:00.236366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:19.984 [2024-11-15 11:31:00.236383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:19.984 [2024-11-15 11:31:00.236404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:19.984 [2024-11-15 11:31:00.236421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:19.984 [2024-11-15 11:31:00.236438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.984 [2024-11-15 11:31:00.236452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.985 [2024-11-15 11:31:00.236464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.985 [2024-11-15 11:31:00.236477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.985 [2024-11-15 11:31:00.236485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.236525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.236540] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:19.985 
[2024-11-15 11:31:00.236550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.236612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.236692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236723] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:19.985 [2024-11-15 11:31:00.236731] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:19.985 [2024-11-15 11:31:00.236737] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.985 [2024-11-15 11:31:00.236747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.236763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.236779] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:19.985 [2024-11-15 11:31:00.236799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236829] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:19.985 [2024-11-15 11:31:00.236837] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.985 [2024-11-15 11:31:00.236843] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.985 [2024-11-15 11:31:00.236853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.236879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.236901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236927] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:19.985 [2024-11-15 11:31:00.236935] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.985 [2024-11-15 11:31:00.236941] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.985 [2024-11-15 11:31:00.236950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.236964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.236978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.236989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.237002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.237013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.237021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.237029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.237036] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:19.985 [2024-11-15 11:31:00.237044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:19.985 [2024-11-15 11:31:00.237067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:19.985 [2024-11-15 11:31:00.237093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.237111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.237131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.237148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.237165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.237180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.237197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.237209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:19.985 [2024-11-15 11:31:00.237231] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:19.985 [2024-11-15 11:31:00.237242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:19.985 [2024-11-15 11:31:00.237248] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:19.985 [2024-11-15 11:31:00.237254] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:19.985 [2024-11-15 11:31:00.237260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:19.985 [2024-11-15 11:31:00.237270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:19.985 [2024-11-15 11:31:00.237296] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:19.985 [2024-11-15 11:31:00.237316] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:19.985 [2024-11-15 11:31:00.237323] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.985 [2024-11-15 11:31:00.237333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.237345] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:19.985 [2024-11-15 11:31:00.237354] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:19.985 [2024-11-15 11:31:00.237360] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.985 [2024-11-15 11:31:00.237369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:19.985 [2024-11-15 11:31:00.237382] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:19.985 [2024-11-15 11:31:00.237390] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:19.985 [2024-11-15 11:31:00.237412] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:19.985 [2024-11-15 11:31:00.237421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:19.986 [2024-11-15 11:31:00.237433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:19.986 [2024-11-15 11:31:00.237454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:19.986 [2024-11-15 11:31:00.237475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:19.986 [2024-11-15 11:31:00.237488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:19.986 ===================================================== 00:12:19.986 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:19.986 ===================================================== 00:12:19.986 Controller Capabilities/Features 00:12:19.986 ================================ 00:12:19.986 Vendor ID: 4e58 00:12:19.986 Subsystem Vendor ID: 4e58 00:12:19.986 Serial Number: SPDK1 00:12:19.986 Model Number: SPDK bdev Controller 00:12:19.986 Firmware Version: 25.01 00:12:19.986 Recommended Arb Burst: 6 00:12:19.986 IEEE OUI Identifier: 8d 6b 50 00:12:19.986 Multi-path I/O 00:12:19.986 May have multiple subsystem ports: Yes 00:12:19.986 May have multiple controllers: Yes 00:12:19.986 Associated with SR-IOV VF: No 00:12:19.986 Max Data Transfer Size: 131072 00:12:19.986 Max Number of Namespaces: 32 00:12:19.986 Max Number of I/O Queues: 127 00:12:19.986 NVMe Specification Version (VS): 1.3 00:12:19.986 NVMe Specification Version (Identify): 1.3 00:12:19.986 Maximum Queue Entries: 256 00:12:19.986 Contiguous Queues Required: Yes 00:12:19.986 Arbitration Mechanisms Supported 00:12:19.986 Weighted Round Robin: Not Supported 00:12:19.986 Vendor Specific: Not Supported 00:12:19.986 Reset Timeout: 15000 ms 00:12:19.986 Doorbell Stride: 4 bytes 00:12:19.986 NVM Subsystem Reset: Not Supported 00:12:19.986 Command Sets Supported 00:12:19.986 NVM Command Set: Supported 00:12:19.986 Boot Partition: Not Supported 00:12:19.986 Memory Page Size Minimum: 4096 bytes 00:12:19.986 Memory Page Size Maximum: 4096 bytes 00:12:19.986 Persistent Memory Region: Not Supported 00:12:19.986 Optional Asynchronous Events Supported 00:12:19.986 Namespace Attribute Notices: Supported 00:12:19.986 Firmware Activation Notices: Not Supported 00:12:19.986 ANA Change Notices: Not Supported 00:12:19.986 PLE Aggregate Log Change Notices: Not Supported 00:12:19.986 LBA Status Info Alert Notices: Not Supported 00:12:19.986 EGE Aggregate Log Change Notices: Not Supported 00:12:19.986 Normal NVM Subsystem Shutdown event: Not Supported 00:12:19.986 Zone Descriptor Change Notices: Not Supported 00:12:19.986 Discovery Log Change Notices: Not Supported 00:12:19.986 Controller Attributes 00:12:19.986 128-bit Host Identifier: Supported 00:12:19.986 Non-Operational Permissive Mode: Not Supported 00:12:19.986 NVM Sets: Not Supported 00:12:19.986 Read Recovery Levels: Not Supported 00:12:19.986 Endurance Groups: Not Supported 00:12:19.986 Predictable Latency Mode: Not Supported 00:12:19.986 Traffic Based Keep ALive: Not Supported 00:12:19.986 Namespace Granularity: Not Supported 00:12:19.986 SQ Associations: Not Supported 00:12:19.986 UUID List: Not Supported 00:12:19.986 Multi-Domain Subsystem: Not Supported 00:12:19.986 Fixed Capacity Management: Not Supported 00:12:19.986 Variable Capacity Management: Not Supported 00:12:19.986 Delete Endurance Group: Not Supported 00:12:19.986 Delete NVM Set: Not Supported 00:12:19.986 Extended LBA Formats Supported: Not Supported 00:12:19.986 Flexible Data Placement Supported: Not Supported 00:12:19.986 00:12:19.986 Controller Memory Buffer Support 00:12:19.986 ================================ 00:12:19.986 
Supported: No 00:12:19.986 00:12:19.986 Persistent Memory Region Support 00:12:19.986 ================================ 00:12:19.986 Supported: No 00:12:19.986 00:12:19.986 Admin Command Set Attributes 00:12:19.986 ============================ 00:12:19.986 Security Send/Receive: Not Supported 00:12:19.986 Format NVM: Not Supported 00:12:19.986 Firmware Activate/Download: Not Supported 00:12:19.986 Namespace Management: Not Supported 00:12:19.986 Device Self-Test: Not Supported 00:12:19.986 Directives: Not Supported 00:12:19.986 NVMe-MI: Not Supported 00:12:19.986 Virtualization Management: Not Supported 00:12:19.986 Doorbell Buffer Config: Not Supported 00:12:19.986 Get LBA Status Capability: Not Supported 00:12:19.986 Command & Feature Lockdown Capability: Not Supported 00:12:19.986 Abort Command Limit: 4 00:12:19.986 Async Event Request Limit: 4 00:12:19.986 Number of Firmware Slots: N/A 00:12:19.986 Firmware Slot 1 Read-Only: N/A 00:12:19.986 Firmware Activation Without Reset: N/A 00:12:19.986 Multiple Update Detection Support: N/A 00:12:19.986 Firmware Update Granularity: No Information Provided 00:12:19.986 Per-Namespace SMART Log: No 00:12:19.986 Asymmetric Namespace Access Log Page: Not Supported 00:12:19.986 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:19.986 Command Effects Log Page: Supported 00:12:19.986 Get Log Page Extended Data: Supported 00:12:19.986 Telemetry Log Pages: Not Supported 00:12:19.986 Persistent Event Log Pages: Not Supported 00:12:19.986 Supported Log Pages Log Page: May Support 00:12:19.986 Commands Supported & Effects Log Page: Not Supported 00:12:19.986 Feature Identifiers & Effects Log Page:May Support 00:12:19.986 NVMe-MI Commands & Effects Log Page: May Support 00:12:19.986 Data Area 4 for Telemetry Log: Not Supported 00:12:19.986 Error Log Page Entries Supported: 128 00:12:19.986 Keep Alive: Supported 00:12:19.986 Keep Alive Granularity: 10000 ms 00:12:19.986 00:12:19.986 NVM Command Set Attributes 00:12:19.986 ========================== 00:12:19.986 Submission Queue Entry Size 00:12:19.986 Max: 64 00:12:19.986 Min: 64 00:12:19.986 Completion Queue Entry Size 00:12:19.986 Max: 16 00:12:19.986 Min: 16 00:12:19.986 Number of Namespaces: 32 00:12:19.986 Compare Command: Supported 00:12:19.986 Write Uncorrectable Command: Not Supported 00:12:19.986 Dataset Management Command: Supported 00:12:19.986 Write Zeroes Command: Supported 00:12:19.986 Set Features Save Field: Not Supported 00:12:19.986 Reservations: Not Supported 00:12:19.986 Timestamp: Not Supported 00:12:19.986 Copy: Supported 00:12:19.986 Volatile Write Cache: Present 00:12:19.986 Atomic Write Unit (Normal): 1 00:12:19.986 Atomic Write Unit (PFail): 1 00:12:19.986 Atomic Compare & Write Unit: 1 00:12:19.986 Fused Compare & Write: Supported 00:12:19.986 Scatter-Gather List 00:12:19.986 SGL Command Set: Supported (Dword aligned) 00:12:19.986 SGL Keyed: Not Supported 00:12:19.986 SGL Bit Bucket Descriptor: Not Supported 00:12:19.986 SGL Metadata Pointer: Not Supported 00:12:19.986 Oversized SGL: Not Supported 00:12:19.986 SGL Metadata Address: Not Supported 00:12:19.986 SGL Offset: Not Supported 00:12:19.986 Transport SGL Data Block: Not Supported 00:12:19.986 Replay Protected Memory Block: Not Supported 00:12:19.986 00:12:19.986 Firmware Slot Information 00:12:19.986 ========================= 00:12:19.986 Active slot: 1 00:12:19.986 Slot 1 Firmware Revision: 25.01 00:12:19.986 00:12:19.986 00:12:19.986 Commands Supported and Effects 00:12:19.986 ============================== 00:12:19.986 Admin 
Commands 00:12:19.986 -------------- 00:12:19.986 Get Log Page (02h): Supported 00:12:19.986 Identify (06h): Supported 00:12:19.986 Abort (08h): Supported 00:12:19.986 Set Features (09h): Supported 00:12:19.986 Get Features (0Ah): Supported 00:12:19.986 Asynchronous Event Request (0Ch): Supported 00:12:19.986 Keep Alive (18h): Supported 00:12:19.986 I/O Commands 00:12:19.986 ------------ 00:12:19.986 Flush (00h): Supported LBA-Change 00:12:19.986 Write (01h): Supported LBA-Change 00:12:19.986 Read (02h): Supported 00:12:19.986 Compare (05h): Supported 00:12:19.986 Write Zeroes (08h): Supported LBA-Change 00:12:19.986 Dataset Management (09h): Supported LBA-Change 00:12:19.986 Copy (19h): Supported LBA-Change 00:12:19.986 00:12:19.986 Error Log 00:12:19.986 ========= 00:12:19.986 00:12:19.986 Arbitration 00:12:19.986 =========== 00:12:19.986 Arbitration Burst: 1 00:12:19.986 00:12:19.986 Power Management 00:12:19.986 ================ 00:12:19.986 Number of Power States: 1 00:12:19.986 Current Power State: Power State #0 00:12:19.986 Power State #0: 00:12:19.986 Max Power: 0.00 W 00:12:19.986 Non-Operational State: Operational 00:12:19.986 Entry Latency: Not Reported 00:12:19.986 Exit Latency: Not Reported 00:12:19.986 Relative Read Throughput: 0 00:12:19.986 Relative Read Latency: 0 00:12:19.986 Relative Write Throughput: 0 00:12:19.986 Relative Write Latency: 0 00:12:19.986 Idle Power: Not Reported 00:12:19.986 Active Power: Not Reported 00:12:19.986 Non-Operational Permissive Mode: Not Supported 00:12:19.987 00:12:19.987 Health Information 00:12:19.987 ================== 00:12:19.987 Critical Warnings: 00:12:19.987 Available Spare Space: OK 00:12:19.987 Temperature: OK 00:12:19.987 Device Reliability: OK 00:12:19.987 Read Only: No 00:12:19.987 Volatile Memory Backup: OK 00:12:19.987 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:19.987 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:19.987 Available Spare: 0% 00:12:19.987 Available Sp[2024-11-15 11:31:00.237669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:19.987 [2024-11-15 11:31:00.237705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:19.987 [2024-11-15 11:31:00.237746] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:19.987 [2024-11-15 11:31:00.237764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.987 [2024-11-15 11:31:00.237775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.987 [2024-11-15 11:31:00.237785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.987 [2024-11-15 11:31:00.237794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.987 [2024-11-15 11:31:00.240316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:19.987 [2024-11-15 11:31:00.240339] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:19.987 [2024-11-15 11:31:00.241087] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:19.987 [2024-11-15 11:31:00.241164] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:19.987 [2024-11-15 11:31:00.241177] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:19.987 [2024-11-15 11:31:00.242094] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:19.987 [2024-11-15 11:31:00.242118] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:19.987 [2024-11-15 11:31:00.242172] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:19.987 [2024-11-15 11:31:00.245329] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:19.987 are Threshold: 0% 00:12:19.987 Life Percentage Used: 0% 00:12:19.987 Data Units Read: 0 00:12:19.987 Data Units Written: 0 00:12:19.987 Host Read Commands: 0 00:12:19.987 Host Write Commands: 0 00:12:19.987 Controller Busy Time: 0 minutes 00:12:19.987 Power Cycles: 0 00:12:19.987 Power On Hours: 0 hours 00:12:19.987 Unsafe Shutdowns: 0 00:12:19.987 Unrecoverable Media Errors: 0 00:12:19.987 Lifetime Error Log Entries: 0 00:12:19.987 Warning Temperature Time: 0 minutes 00:12:19.987 Critical Temperature Time: 0 minutes 00:12:19.987 00:12:19.987 Number of Queues 00:12:19.987 ================ 00:12:19.987 Number of I/O Submission Queues: 127 00:12:19.987 Number of I/O Completion Queues: 127 00:12:19.987 00:12:19.987 Active Namespaces 00:12:19.987 ================= 00:12:19.987 Namespace ID:1 00:12:19.987 Error Recovery Timeout: Unlimited 00:12:19.987 Command Set Identifier: NVM (00h) 00:12:19.987 Deallocate: Supported 00:12:19.987 Deallocated/Unwritten Error: Not Supported 00:12:19.987 Deallocated Read Value: Unknown 00:12:19.987 Deallocate in Write Zeroes: Not Supported 00:12:19.987 Deallocated Guard Field: 0xFFFF 00:12:19.987 Flush: Supported 00:12:19.987 Reservation: Supported 00:12:19.987 Namespace Sharing Capabilities: Multiple Controllers 00:12:19.987 Size (in LBAs): 131072 (0GiB) 00:12:19.987 Capacity (in LBAs): 131072 (0GiB) 00:12:19.987 Utilization (in LBAs): 131072 (0GiB) 00:12:19.987 NGUID: D1123A582C72492F9ADA54D1B44B83DE 00:12:19.987 UUID: d1123a58-2c72-492f-9ada-54d1b44b83de 00:12:19.987 Thin Provisioning: Not Supported 00:12:19.987 Per-NS Atomic Units: Yes 00:12:19.987 Atomic Boundary Size (Normal): 0 00:12:19.987 Atomic Boundary Size (PFail): 0 00:12:19.987 Atomic Boundary Offset: 0 00:12:19.987 Maximum Single Source Range Length: 65535 00:12:19.987 Maximum Copy Length: 65535 00:12:19.987 Maximum Source Range Count: 1 00:12:19.987 NGUID/EUI64 Never Reused: No 00:12:19.987 Namespace Write Protected: No 00:12:19.987 Number of LBA Formats: 1 00:12:19.987 Current LBA Format: LBA Format #00 00:12:19.987 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.987 00:12:19.987 11:31:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
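(Editor's note: the nvmf_vfio_user.sh@84 step just above drives the first vfio-user controller with spdk_nvme_perf; its latency table follows in the output below. For readability, here is a minimal bash sketch of that same invocation pulled out of the trace. The transport string, subsystem NQN, and all flag values are copied verbatim from the logged command; SPDK_DIR is an assumption about the checkout location, not something new.)

```bash
#!/usr/bin/env bash
# Reproduction sketch of the @84 perf step logged above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: same workspace layout as this run
TRADDR=/var/run/vfio-user/domain/vfio-user1/1                # vfio-user socket directory created by the target
SUBNQN=nqn.2019-07.io.spdk:cnode1

# 4 KiB reads (-o 4096 -w read), queue depth 128 (-q), 5 seconds (-t),
# pinned to core 1 (-c 0x2), exactly as invoked by nvmf_vfio_user.sh@84.
"$SPDK_DIR/build/bin/spdk_nvme_perf" \
    -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$SUBNQN" \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
```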
00:12:20.303 [2024-11-15 11:31:00.507281] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.591 Initializing NVMe Controllers 00:12:25.591 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:25.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:25.591 Initialization complete. Launching workers. 00:12:25.591 ======================================================== 00:12:25.591 Latency(us) 00:12:25.591 Device Information : IOPS MiB/s Average min max 00:12:25.591 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33416.55 130.53 3829.91 1166.25 8089.06 00:12:25.591 ======================================================== 00:12:25.591 Total : 33416.55 130.53 3829.91 1166.25 8089.06 00:12:25.591 00:12:25.591 [2024-11-15 11:31:05.527629] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.591 11:31:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:25.591 [2024-11-15 11:31:05.791899] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:30.850 Initializing NVMe Controllers 00:12:30.850 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.850 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:30.850 Initialization complete. Launching workers. 
00:12:30.850 ======================================================== 00:12:30.850 Latency(us) 00:12:30.850 Device Information : IOPS MiB/s Average min max 00:12:30.850 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.40 62.40 8021.03 6970.82 15989.31 00:12:30.850 ======================================================== 00:12:30.850 Total : 15974.40 62.40 8021.03 6970.82 15989.31 00:12:30.850 00:12:30.850 [2024-11-15 11:31:10.828434] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.850 11:31:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:30.850 [2024-11-15 11:31:11.056559] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.110 [2024-11-15 11:31:16.126650] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.110 Initializing NVMe Controllers 00:12:36.110 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.110 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.110 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:36.110 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:36.110 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:36.110 Initialization complete. Launching workers. 00:12:36.110 Starting thread on core 2 00:12:36.110 Starting thread on core 3 00:12:36.110 Starting thread on core 1 00:12:36.110 11:31:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:36.110 [2024-11-15 11:31:16.443779] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:39.388 [2024-11-15 11:31:19.509717] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:39.388 Initializing NVMe Controllers 00:12:39.388 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:39.388 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:39.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:39.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:39.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:39.388 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:39.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:39.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:39.388 Initialization complete. Launching workers. 
00:12:39.388 Starting thread on core 1 with urgent priority queue 00:12:39.388 Starting thread on core 2 with urgent priority queue 00:12:39.388 Starting thread on core 3 with urgent priority queue 00:12:39.388 Starting thread on core 0 with urgent priority queue 00:12:39.388 SPDK bdev Controller (SPDK1 ) core 0: 5237.67 IO/s 19.09 secs/100000 ios 00:12:39.388 SPDK bdev Controller (SPDK1 ) core 1: 5425.67 IO/s 18.43 secs/100000 ios 00:12:39.388 SPDK bdev Controller (SPDK1 ) core 2: 5186.00 IO/s 19.28 secs/100000 ios 00:12:39.388 SPDK bdev Controller (SPDK1 ) core 3: 5091.33 IO/s 19.64 secs/100000 ios 00:12:39.388 ======================================================== 00:12:39.388 00:12:39.388 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:39.645 [2024-11-15 11:31:19.834832] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:39.645 Initializing NVMe Controllers 00:12:39.645 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:39.645 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:39.645 Namespace ID: 1 size: 0GB 00:12:39.645 Initialization complete. 00:12:39.645 INFO: using host memory buffer for IO 00:12:39.645 Hello world! 00:12:39.645 [2024-11-15 11:31:19.868504] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:39.645 11:31:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:39.902 [2024-11-15 11:31:20.197900] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:40.833 Initializing NVMe Controllers 00:12:40.833 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.833 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:40.833 Initialization complete. Launching workers. 
00:12:40.833 submit (in ns) avg, min, max = 7971.9, 3566.7, 4021258.9 00:12:40.833 complete (in ns) avg, min, max = 26231.0, 2078.9, 6013277.8 00:12:40.833 00:12:40.833 Submit histogram 00:12:40.833 ================ 00:12:40.833 Range in us Cumulative Count 00:12:40.833 3.556 - 3.579: 0.4228% ( 55) 00:12:40.833 3.579 - 3.603: 4.4668% ( 526) 00:12:40.833 3.603 - 3.627: 11.9782% ( 977) 00:12:40.833 3.627 - 3.650: 25.0404% ( 1699) 00:12:40.833 3.650 - 3.674: 34.5583% ( 1238) 00:12:40.833 3.674 - 3.698: 42.1773% ( 991) 00:12:40.833 3.698 - 3.721: 48.2202% ( 786) 00:12:40.833 3.721 - 3.745: 53.5558% ( 694) 00:12:40.833 3.745 - 3.769: 59.1066% ( 722) 00:12:40.833 3.769 - 3.793: 63.2275% ( 536) 00:12:40.833 3.793 - 3.816: 66.2566% ( 394) 00:12:40.833 3.816 - 3.840: 68.9167% ( 346) 00:12:40.833 3.840 - 3.864: 72.5994% ( 479) 00:12:40.833 3.864 - 3.887: 76.7587% ( 541) 00:12:40.833 3.887 - 3.911: 81.0410% ( 557) 00:12:40.833 3.911 - 3.935: 84.2777% ( 421) 00:12:40.833 3.935 - 3.959: 86.3612% ( 271) 00:12:40.833 3.959 - 3.982: 88.2986% ( 252) 00:12:40.833 3.982 - 4.006: 90.0976% ( 234) 00:12:40.833 4.006 - 4.030: 91.2432% ( 149) 00:12:40.833 4.030 - 4.053: 92.1658% ( 120) 00:12:40.833 4.053 - 4.077: 93.0422% ( 114) 00:12:40.833 4.077 - 4.101: 93.7188% ( 88) 00:12:40.833 4.101 - 4.124: 94.3184% ( 78) 00:12:40.833 4.124 - 4.148: 94.9181% ( 78) 00:12:40.833 4.148 - 4.172: 95.5101% ( 77) 00:12:40.833 4.172 - 4.196: 95.9022% ( 51) 00:12:40.833 4.196 - 4.219: 96.1329% ( 30) 00:12:40.833 4.219 - 4.243: 96.3404% ( 27) 00:12:40.833 4.243 - 4.267: 96.4481% ( 14) 00:12:40.833 4.267 - 4.290: 96.6172% ( 22) 00:12:40.833 4.290 - 4.314: 96.7556% ( 18) 00:12:40.833 4.314 - 4.338: 96.8632% ( 14) 00:12:40.833 4.338 - 4.361: 96.9401% ( 10) 00:12:40.833 4.361 - 4.385: 97.0324% ( 12) 00:12:40.833 4.385 - 4.409: 97.0939% ( 8) 00:12:40.833 4.409 - 4.433: 97.1554% ( 8) 00:12:40.833 4.433 - 4.456: 97.2015% ( 6) 00:12:40.833 4.456 - 4.480: 97.2169% ( 2) 00:12:40.833 4.480 - 4.504: 97.2476% ( 4) 00:12:40.833 4.504 - 4.527: 97.2553% ( 1) 00:12:40.833 4.527 - 4.551: 97.2707% ( 2) 00:12:40.833 4.575 - 4.599: 97.2861% ( 2) 00:12:40.833 4.599 - 4.622: 97.3015% ( 2) 00:12:40.833 4.646 - 4.670: 97.3168% ( 2) 00:12:40.833 4.670 - 4.693: 97.3322% ( 2) 00:12:40.833 4.693 - 4.717: 97.3476% ( 2) 00:12:40.833 4.717 - 4.741: 97.3630% ( 2) 00:12:40.833 4.741 - 4.764: 97.3937% ( 4) 00:12:40.833 4.764 - 4.788: 97.4398% ( 6) 00:12:40.833 4.788 - 4.812: 97.4860% ( 6) 00:12:40.833 4.812 - 4.836: 97.5090% ( 3) 00:12:40.833 4.836 - 4.859: 97.5936% ( 11) 00:12:40.833 4.859 - 4.883: 97.6397% ( 6) 00:12:40.833 4.883 - 4.907: 97.6859% ( 6) 00:12:40.833 4.907 - 4.930: 97.7551% ( 9) 00:12:40.833 4.930 - 4.954: 97.8012% ( 6) 00:12:40.833 4.954 - 4.978: 97.8473% ( 6) 00:12:40.833 4.978 - 5.001: 97.9088% ( 8) 00:12:40.833 5.001 - 5.025: 97.9319% ( 3) 00:12:40.833 5.025 - 5.049: 97.9780% ( 6) 00:12:40.833 5.049 - 5.073: 98.0165% ( 5) 00:12:40.833 5.073 - 5.096: 98.0395% ( 3) 00:12:40.833 5.096 - 5.120: 98.0472% ( 1) 00:12:40.833 5.144 - 5.167: 98.0626% ( 2) 00:12:40.833 5.167 - 5.191: 98.1087% ( 6) 00:12:40.833 5.215 - 5.239: 98.1164% ( 1) 00:12:40.833 5.239 - 5.262: 98.1241% ( 1) 00:12:40.833 5.262 - 5.286: 98.1318% ( 1) 00:12:40.833 5.286 - 5.310: 98.1472% ( 2) 00:12:40.833 5.310 - 5.333: 98.1548% ( 1) 00:12:40.833 5.333 - 5.357: 98.1702% ( 2) 00:12:40.833 5.357 - 5.381: 98.1779% ( 1) 00:12:40.833 5.404 - 5.428: 98.1856% ( 1) 00:12:40.833 5.641 - 5.665: 98.1933% ( 1) 00:12:40.833 5.736 - 5.760: 98.2010% ( 1) 00:12:40.833 5.855 - 5.879: 98.2087% ( 1) 
00:12:40.833 5.902 - 5.926: 98.2163% ( 1) 00:12:40.833 6.116 - 6.163: 98.2240% ( 1) 00:12:40.833 6.258 - 6.305: 98.2317% ( 1) 00:12:40.833 6.590 - 6.637: 98.2394% ( 1) 00:12:40.833 6.732 - 6.779: 98.2471% ( 1) 00:12:40.833 7.206 - 7.253: 98.2548% ( 1) 00:12:40.833 7.253 - 7.301: 98.2625% ( 1) 00:12:40.833 7.348 - 7.396: 98.2702% ( 1) 00:12:40.833 7.585 - 7.633: 98.2779% ( 1) 00:12:40.833 7.822 - 7.870: 98.2932% ( 2) 00:12:40.833 7.870 - 7.917: 98.3009% ( 1) 00:12:40.833 7.917 - 7.964: 98.3086% ( 1) 00:12:40.833 7.964 - 8.012: 98.3163% ( 1) 00:12:40.833 8.012 - 8.059: 98.3240% ( 1) 00:12:40.833 8.059 - 8.107: 98.3317% ( 1) 00:12:40.833 8.344 - 8.391: 98.3394% ( 1) 00:12:40.833 8.391 - 8.439: 98.3470% ( 1) 00:12:40.833 8.439 - 8.486: 98.3547% ( 1) 00:12:40.833 8.486 - 8.533: 98.3624% ( 1) 00:12:40.833 8.533 - 8.581: 98.3701% ( 1) 00:12:40.833 8.581 - 8.628: 98.3778% ( 1) 00:12:40.833 8.676 - 8.723: 98.3855% ( 1) 00:12:40.833 8.723 - 8.770: 98.4009% ( 2) 00:12:40.833 8.770 - 8.818: 98.4085% ( 1) 00:12:40.833 8.818 - 8.865: 98.4162% ( 1) 00:12:40.833 8.865 - 8.913: 98.4239% ( 1) 00:12:40.833 8.913 - 8.960: 98.4393% ( 2) 00:12:40.833 8.960 - 9.007: 98.4547% ( 2) 00:12:40.833 9.007 - 9.055: 98.4777% ( 3) 00:12:40.833 9.102 - 9.150: 98.4931% ( 2) 00:12:40.833 9.292 - 9.339: 98.5008% ( 1) 00:12:40.833 9.339 - 9.387: 98.5085% ( 1) 00:12:40.833 9.434 - 9.481: 98.5239% ( 2) 00:12:40.833 9.529 - 9.576: 98.5316% ( 1) 00:12:40.833 9.576 - 9.624: 98.5469% ( 2) 00:12:40.833 9.624 - 9.671: 98.5546% ( 1) 00:12:40.833 9.671 - 9.719: 98.5623% ( 1) 00:12:40.833 9.719 - 9.766: 98.5700% ( 1) 00:12:40.833 9.861 - 9.908: 98.5777% ( 1) 00:12:40.833 9.908 - 9.956: 98.5931% ( 2) 00:12:40.833 9.956 - 10.003: 98.6008% ( 1) 00:12:40.833 10.050 - 10.098: 98.6161% ( 2) 00:12:40.833 10.240 - 10.287: 98.6315% ( 2) 00:12:40.833 10.287 - 10.335: 98.6392% ( 1) 00:12:40.833 10.382 - 10.430: 98.6469% ( 1) 00:12:40.833 10.430 - 10.477: 98.6623% ( 2) 00:12:40.833 10.572 - 10.619: 98.6699% ( 1) 00:12:40.833 10.619 - 10.667: 98.6853% ( 2) 00:12:40.833 10.951 - 10.999: 98.6930% ( 1) 00:12:40.833 11.046 - 11.093: 98.7007% ( 1) 00:12:40.833 11.093 - 11.141: 98.7084% ( 1) 00:12:40.833 11.236 - 11.283: 98.7161% ( 1) 00:12:40.833 11.425 - 11.473: 98.7238% ( 1) 00:12:40.833 11.804 - 11.852: 98.7315% ( 1) 00:12:40.833 11.947 - 11.994: 98.7391% ( 1) 00:12:40.833 12.041 - 12.089: 98.7468% ( 1) 00:12:40.833 12.136 - 12.231: 98.7545% ( 1) 00:12:40.833 12.231 - 12.326: 98.7622% ( 1) 00:12:40.833 12.326 - 12.421: 98.7699% ( 1) 00:12:40.833 12.421 - 12.516: 98.7776% ( 1) 00:12:40.833 12.516 - 12.610: 98.7853% ( 1) 00:12:40.833 12.610 - 12.705: 98.8006% ( 2) 00:12:40.833 12.705 - 12.800: 98.8083% ( 1) 00:12:40.833 12.895 - 12.990: 98.8237% ( 2) 00:12:40.833 14.222 - 14.317: 98.8391% ( 2) 00:12:40.833 14.317 - 14.412: 98.8545% ( 2) 00:12:40.833 14.507 - 14.601: 98.8698% ( 2) 00:12:40.833 16.972 - 17.067: 98.8775% ( 1) 00:12:40.833 17.067 - 17.161: 98.8929% ( 2) 00:12:40.833 17.161 - 17.256: 98.9006% ( 1) 00:12:40.833 17.256 - 17.351: 98.9160% ( 2) 00:12:40.833 17.351 - 17.446: 98.9390% ( 3) 00:12:40.833 17.446 - 17.541: 98.9775% ( 5) 00:12:40.833 17.541 - 17.636: 99.0236% ( 6) 00:12:40.833 17.636 - 17.730: 99.0851% ( 8) 00:12:40.833 17.730 - 17.825: 99.1466% ( 8) 00:12:40.833 17.825 - 17.920: 99.2542% ( 14) 00:12:40.833 17.920 - 18.015: 99.3081% ( 7) 00:12:40.833 18.015 - 18.110: 99.3619% ( 7) 00:12:40.833 18.110 - 18.204: 99.4080% ( 6) 00:12:40.833 18.204 - 18.299: 99.5156% ( 14) 00:12:40.833 18.299 - 18.394: 99.5618% ( 6) 00:12:40.833 18.394 - 
18.489: 99.6617% ( 13) 00:12:40.833 18.489 - 18.584: 99.6848% ( 3) 00:12:40.833 18.584 - 18.679: 99.7078% ( 3) 00:12:40.833 18.679 - 18.773: 99.7463% ( 5) 00:12:40.833 18.773 - 18.868: 99.7770% ( 4) 00:12:40.833 18.868 - 18.963: 99.8078% ( 4) 00:12:40.833 18.963 - 19.058: 99.8155% ( 1) 00:12:40.833 19.058 - 19.153: 99.8232% ( 1) 00:12:40.833 19.437 - 19.532: 99.8385% ( 2) 00:12:40.833 19.532 - 19.627: 99.8462% ( 1) 00:12:40.833 19.627 - 19.721: 99.8539% ( 1) 00:12:40.833 20.101 - 20.196: 99.8616% ( 1) 00:12:40.833 20.196 - 20.290: 99.8693% ( 1) 00:12:40.833 21.807 - 21.902: 99.8770% ( 1) 00:12:40.833 23.324 - 23.419: 99.8847% ( 1) 00:12:40.833 23.893 - 23.988: 99.8924% ( 1) 00:12:40.833 31.099 - 31.289: 99.9001% ( 1) 00:12:40.833 3980.705 - 4004.978: 99.9616% ( 8) 00:12:40.833 4004.978 - 4029.250: 100.0000% ( 5) 00:12:40.833 00:12:40.833 Complete histogram 00:12:40.833 ================== 00:12:40.833 Range in us Cumulative Count 00:12:40.833 2.074 - 2.086: 2.1758% ( 283) 00:12:40.833 2.086 - 2.098: 34.0509% ( 4146) 00:12:40.833 2.098 - 2.110: 45.1372% ( 1442) 00:12:40.833 2.110 - 2.121: 48.6738% ( 460) 00:12:40.833 2.121 - 2.133: 55.3394% ( 867) 00:12:40.833 2.133 - 2.145: 57.0923% ( 228) 00:12:40.833 2.145 - 2.157: 61.7975% ( 612) 00:12:40.833 2.157 - 2.169: 75.1826% ( 1741) 00:12:40.833 2.169 - 2.181: 77.9965% ( 366) 00:12:40.833 2.181 - 2.193: 79.7724% ( 231) 00:12:40.833 2.193 - 2.204: 82.3557% ( 336) 00:12:40.833 2.204 - 2.216: 82.9861% ( 82) 00:12:40.833 2.216 - 2.228: 84.7005% ( 223) 00:12:40.833 2.228 - 2.240: 88.3140% ( 470) 00:12:40.834 2.240 - 2.252: 90.9664% ( 345) 00:12:40.834 2.252 - 2.264: 92.4425% ( 192) 00:12:40.834 2.264 - 2.276: 93.2652% ( 107) 00:12:40.834 2.276 - 2.287: 93.5881% ( 42) 00:12:40.834 2.287 - 2.299: 93.8495% ( 34) 00:12:40.834 2.299 - 2.311: 94.1954% ( 45) 00:12:40.834 2.311 - 2.323: 94.9489% ( 98) 00:12:40.834 2.323 - 2.335: 95.3871% ( 57) 00:12:40.834 2.335 - 2.347: 95.5332% ( 19) 00:12:40.834 2.347 - 2.359: 95.5947% ( 8) 00:12:40.834 2.359 - 2.370: 95.6331% ( 5) 00:12:40.834 2.370 - 2.382: 95.7023% ( 9) 00:12:40.834 2.382 - 2.394: 95.8945% ( 25) 00:12:40.834 2.394 - 2.406: 96.2482% ( 46) 00:12:40.834 2.406 - 2.418: 96.4788% ( 30) 00:12:40.834 2.418 - 2.430: 96.6941% ( 28) 00:12:40.834 2.430 - 2.441: 96.8786% ( 24) 00:12:40.834 2.441 - 2.453: 97.0016% ( 16) 00:12:40.834 2.453 - 2.465: 97.1169% ( 15) 00:12:40.834 2.465 - 2.477: 97.2553% ( 18) 00:12:40.834 2.477 - 2.489: 97.4629% ( 27) 00:12:40.834 2.489 - 2.501: 97.6320% ( 22) 00:12:40.834 2.501 - 2.513: 97.7704% ( 18) 00:12:40.834 2.513 - 2.524: 97.8781% ( 14) 00:12:40.834 2.524 - 2.536: 97.9626% ( 11) 00:12:40.834 2.536 - 2.548: 98.0703% ( 14) 00:12:40.834 2.548 - 2.560: 98.1164% ( 6) 00:12:40.834 2.560 - 2.572: 98.1625% ( 6) 00:12:40.834 2.572 - 2.584: 98.2010% ( 5) 00:12:40.834 2.584 - 2.596: 98.2548% ( 7) 00:12:40.834 2.596 - 2.607: 98.2702% ( 2) 00:12:40.834 2.607 - 2.619: 98.2779% ( 1) 00:12:40.834 2.619 - 2.631: 98.3009% ( 3) 00:12:40.834 2.631 - 2.643: 98.3240% ( 3) 00:12:40.834 2.643 - 2.655: 98.3394% ( 2) 00:12:40.834 2.655 - 2.667: 98.3470% ( 1) 00:12:40.834 2.667 - 2.679: 98.3547% ( 1) 00:12:40.834 2.679 - 2.690: 98.3778% ( 3) 00:12:40.834 2.714 - 2.726: 98.3855% ( 1) 00:12:40.834 2.785 - 2.797: 98.3932% ( 1) 00:12:40.834 2.821 - 2.833: 98.4009% ( 1) 00:12:40.834 3.034 - 3.058: 98.4085% ( 1) 00:12:40.834 3.342 - 3.366: 98.4162% ( 1) 00:12:40.834 3.390 - 3.413: 98.4316% ( 2) 00:12:40.834 3.437 - 3.461: 98.4470% ( 2) 00:12:40.834 3.508 - 3.532: 98.4547% ( 1) 00:12:40.834 3.532 - 3.556: 98.4701% 
( 2) 00:12:40.834 3.556 - 3.579: 98.4854% ( 2) 00:12:40.834 3.579 - 3.603: 98.4931% ( 1) 00:12:40.834 3.603 - 3.627: 98.5008% ( 1) 00:12:40.834 3.627 - 3.650: 98.5162% ( 2) 00:12:40.834 3.650 - 3.674: 98.5239% ( 1) 00:12:40.834 3.674 - 3.698: 98.5316% ( 1) 00:12:40.834 3.721 - 3.745: 98.5392% ( 1) 00:12:40.834 3.816 - 3.840: 98.5546% ( 2) 00:12:40.834 3.840 - 3.864: 98.5623% ( 1) 00:12:40.834 3.887 - 3.911: 98.5700% ( 1) 00:12:40.834 3.911 - 3.935: 98.6008% ( 4) 00:12:40.834 3.935 - 3.959: 98.6161% ( 2) 00:12:40.834 4.030 - 4.053: 9[2024-11-15 11:31:21.221247] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.091 8.6238% ( 1) 00:12:41.091 4.077 - 4.101: 98.6315% ( 1) 00:12:41.091 4.124 - 4.148: 98.6469% ( 2) 00:12:41.091 4.148 - 4.172: 98.6546% ( 1) 00:12:41.091 6.044 - 6.068: 98.6623% ( 1) 00:12:41.091 6.353 - 6.400: 98.6699% ( 1) 00:12:41.091 6.400 - 6.447: 98.6776% ( 1) 00:12:41.091 6.590 - 6.637: 98.6853% ( 1) 00:12:41.091 6.637 - 6.684: 98.6930% ( 1) 00:12:41.091 6.684 - 6.732: 98.7007% ( 1) 00:12:41.091 6.921 - 6.969: 98.7084% ( 1) 00:12:41.091 6.969 - 7.016: 98.7161% ( 1) 00:12:41.091 7.159 - 7.206: 98.7238% ( 1) 00:12:41.091 7.206 - 7.253: 98.7315% ( 1) 00:12:41.091 7.443 - 7.490: 98.7545% ( 3) 00:12:41.091 7.538 - 7.585: 98.7699% ( 2) 00:12:41.091 7.585 - 7.633: 98.7853% ( 2) 00:12:41.091 8.012 - 8.059: 98.7930% ( 1) 00:12:41.091 8.059 - 8.107: 98.8006% ( 1) 00:12:41.091 8.201 - 8.249: 98.8083% ( 1) 00:12:41.091 8.439 - 8.486: 98.8160% ( 1) 00:12:41.091 8.486 - 8.533: 98.8237% ( 1) 00:12:41.091 8.581 - 8.628: 98.8314% ( 1) 00:12:41.091 9.766 - 9.813: 98.8391% ( 1) 00:12:41.091 10.430 - 10.477: 98.8468% ( 1) 00:12:41.091 12.326 - 12.421: 98.8545% ( 1) 00:12:41.091 12.705 - 12.800: 98.8622% ( 1) 00:12:41.091 15.455 - 15.550: 98.8698% ( 1) 00:12:41.091 15.550 - 15.644: 98.8775% ( 1) 00:12:41.091 15.834 - 15.929: 98.8852% ( 1) 00:12:41.091 15.929 - 16.024: 98.9237% ( 5) 00:12:41.091 16.024 - 16.119: 98.9390% ( 2) 00:12:41.091 16.213 - 16.308: 98.9621% ( 3) 00:12:41.091 16.308 - 16.403: 99.0082% ( 6) 00:12:41.091 16.403 - 16.498: 99.0620% ( 7) 00:12:41.091 16.498 - 16.593: 99.0928% ( 4) 00:12:41.091 16.593 - 16.687: 99.1235% ( 4) 00:12:41.091 16.687 - 16.782: 99.1389% ( 2) 00:12:41.091 16.782 - 16.877: 99.2312% ( 12) 00:12:41.091 16.877 - 16.972: 99.2619% ( 4) 00:12:41.091 16.972 - 17.067: 99.2773% ( 2) 00:12:41.091 17.067 - 17.161: 99.3004% ( 3) 00:12:41.091 17.161 - 17.256: 99.3158% ( 2) 00:12:41.091 17.256 - 17.351: 99.3311% ( 2) 00:12:41.091 17.351 - 17.446: 99.3388% ( 1) 00:12:41.091 17.446 - 17.541: 99.3465% ( 1) 00:12:41.091 17.541 - 17.636: 99.3542% ( 1) 00:12:41.091 17.825 - 17.920: 99.3619% ( 1) 00:12:41.091 17.920 - 18.015: 99.3773% ( 2) 00:12:41.091 18.489 - 18.584: 99.3849% ( 1) 00:12:41.091 18.773 - 18.868: 99.3926% ( 1) 00:12:41.091 23.609 - 23.704: 99.4003% ( 1) 00:12:41.091 2026.761 - 2038.898: 99.4080% ( 1) 00:12:41.091 3980.705 - 4004.978: 99.9308% ( 68) 00:12:41.091 4004.978 - 4029.250: 99.9923% ( 8) 00:12:41.091 5995.330 - 6019.603: 100.0000% ( 1) 00:12:41.091 00:12:41.091 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:41.091 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:41.091 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:12:41.092 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:41.092 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:41.348 [ 00:12:41.348 { 00:12:41.348 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:41.348 "subtype": "Discovery", 00:12:41.348 "listen_addresses": [], 00:12:41.348 "allow_any_host": true, 00:12:41.348 "hosts": [] 00:12:41.348 }, 00:12:41.348 { 00:12:41.348 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:41.348 "subtype": "NVMe", 00:12:41.348 "listen_addresses": [ 00:12:41.348 { 00:12:41.348 "trtype": "VFIOUSER", 00:12:41.348 "adrfam": "IPv4", 00:12:41.348 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:41.348 "trsvcid": "0" 00:12:41.348 } 00:12:41.348 ], 00:12:41.348 "allow_any_host": true, 00:12:41.348 "hosts": [], 00:12:41.348 "serial_number": "SPDK1", 00:12:41.348 "model_number": "SPDK bdev Controller", 00:12:41.348 "max_namespaces": 32, 00:12:41.348 "min_cntlid": 1, 00:12:41.348 "max_cntlid": 65519, 00:12:41.348 "namespaces": [ 00:12:41.348 { 00:12:41.348 "nsid": 1, 00:12:41.348 "bdev_name": "Malloc1", 00:12:41.348 "name": "Malloc1", 00:12:41.348 "nguid": "D1123A582C72492F9ADA54D1B44B83DE", 00:12:41.348 "uuid": "d1123a58-2c72-492f-9ada-54d1b44b83de" 00:12:41.348 } 00:12:41.348 ] 00:12:41.348 }, 00:12:41.348 { 00:12:41.348 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:41.348 "subtype": "NVMe", 00:12:41.348 "listen_addresses": [ 00:12:41.348 { 00:12:41.348 "trtype": "VFIOUSER", 00:12:41.348 "adrfam": "IPv4", 00:12:41.348 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:41.348 "trsvcid": "0" 00:12:41.348 } 00:12:41.348 ], 00:12:41.348 "allow_any_host": true, 00:12:41.348 "hosts": [], 00:12:41.348 "serial_number": "SPDK2", 00:12:41.348 "model_number": "SPDK bdev Controller", 00:12:41.348 "max_namespaces": 32, 00:12:41.348 "min_cntlid": 1, 00:12:41.348 "max_cntlid": 65519, 00:12:41.348 "namespaces": [ 00:12:41.348 { 00:12:41.348 "nsid": 1, 00:12:41.348 "bdev_name": "Malloc2", 00:12:41.348 "name": "Malloc2", 00:12:41.348 "nguid": "70E18F30551E4A2C907E00E709E9488B", 00:12:41.348 "uuid": "70e18f30-551e-4a2c-907e-00e709e9488b" 00:12:41.348 } 00:12:41.348 ] 00:12:41.348 } 00:12:41.348 ] 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2903498 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:41.348 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:41.348 [2024-11-15 11:31:21.770816] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.606 Malloc3 00:12:41.606 11:31:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:41.863 [2024-11-15 11:31:22.172726] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.863 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:41.863 Asynchronous Event Request test 00:12:41.863 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.863 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:41.863 Registering asynchronous event callbacks... 00:12:41.863 Starting namespace attribute notice tests for all controllers... 00:12:41.863 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:41.863 aer_cb - Changed Namespace 00:12:41.863 Cleaning up... 00:12:42.120 [ 00:12:42.120 { 00:12:42.120 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:42.120 "subtype": "Discovery", 00:12:42.120 "listen_addresses": [], 00:12:42.120 "allow_any_host": true, 00:12:42.120 "hosts": [] 00:12:42.120 }, 00:12:42.120 { 00:12:42.120 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:42.120 "subtype": "NVMe", 00:12:42.120 "listen_addresses": [ 00:12:42.120 { 00:12:42.120 "trtype": "VFIOUSER", 00:12:42.120 "adrfam": "IPv4", 00:12:42.120 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:42.120 "trsvcid": "0" 00:12:42.120 } 00:12:42.120 ], 00:12:42.120 "allow_any_host": true, 00:12:42.120 "hosts": [], 00:12:42.120 "serial_number": "SPDK1", 00:12:42.120 "model_number": "SPDK bdev Controller", 00:12:42.120 "max_namespaces": 32, 00:12:42.120 "min_cntlid": 1, 00:12:42.120 "max_cntlid": 65519, 00:12:42.120 "namespaces": [ 00:12:42.120 { 00:12:42.120 "nsid": 1, 00:12:42.120 "bdev_name": "Malloc1", 00:12:42.120 "name": "Malloc1", 00:12:42.120 "nguid": "D1123A582C72492F9ADA54D1B44B83DE", 00:12:42.120 "uuid": "d1123a58-2c72-492f-9ada-54d1b44b83de" 00:12:42.120 }, 00:12:42.120 { 00:12:42.120 "nsid": 2, 00:12:42.120 "bdev_name": "Malloc3", 00:12:42.120 "name": "Malloc3", 00:12:42.120 "nguid": "09EF99DF172A43AEA5AA480D6BAE5828", 00:12:42.120 "uuid": "09ef99df-172a-43ae-a5aa-480d6bae5828" 00:12:42.120 } 00:12:42.120 ] 00:12:42.120 }, 00:12:42.120 { 00:12:42.120 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:42.120 "subtype": "NVMe", 00:12:42.120 "listen_addresses": [ 00:12:42.120 { 00:12:42.120 "trtype": "VFIOUSER", 00:12:42.120 "adrfam": "IPv4", 00:12:42.120 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:42.120 "trsvcid": "0" 00:12:42.120 } 00:12:42.121 ], 00:12:42.121 "allow_any_host": true, 00:12:42.121 "hosts": [], 00:12:42.121 "serial_number": "SPDK2", 00:12:42.121 "model_number": "SPDK bdev 
Controller", 00:12:42.121 "max_namespaces": 32, 00:12:42.121 "min_cntlid": 1, 00:12:42.121 "max_cntlid": 65519, 00:12:42.121 "namespaces": [ 00:12:42.121 { 00:12:42.121 "nsid": 1, 00:12:42.121 "bdev_name": "Malloc2", 00:12:42.121 "name": "Malloc2", 00:12:42.121 "nguid": "70E18F30551E4A2C907E00E709E9488B", 00:12:42.121 "uuid": "70e18f30-551e-4a2c-907e-00e709e9488b" 00:12:42.121 } 00:12:42.121 ] 00:12:42.121 } 00:12:42.121 ] 00:12:42.121 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2903498 00:12:42.121 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.121 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:42.121 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:42.121 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:42.121 [2024-11-15 11:31:22.479270] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:12:42.121 [2024-11-15 11:31:22.479338] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903637 ] 00:12:42.121 [2024-11-15 11:31:22.529202] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:42.121 [2024-11-15 11:31:22.533602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.121 [2024-11-15 11:31:22.533636] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f40100dd000 00:12:42.121 [2024-11-15 11:31:22.534587] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.535596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.536619] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.537625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.538631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.539637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.540644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.121 [2024-11-15 11:31:22.541669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:12:42.121 [2024-11-15 11:31:22.542663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.121 [2024-11-15 11:31:22.542699] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f40100d2000 00:12:42.121 [2024-11-15 11:31:22.543860] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.379 [2024-11-15 11:31:22.562764] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:42.379 [2024-11-15 11:31:22.562801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:42.379 [2024-11-15 11:31:22.564897] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:42.379 [2024-11-15 11:31:22.564950] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:42.379 [2024-11-15 11:31:22.565037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:42.379 [2024-11-15 11:31:22.565060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:42.379 [2024-11-15 11:31:22.565071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:42.379 [2024-11-15 11:31:22.565906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:42.379 [2024-11-15 11:31:22.565927] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:42.379 [2024-11-15 11:31:22.565939] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:42.379 [2024-11-15 11:31:22.566909] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:42.379 [2024-11-15 11:31:22.566930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:42.379 [2024-11-15 11:31:22.566944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:42.379 [2024-11-15 11:31:22.567911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:42.379 [2024-11-15 11:31:22.567931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:42.379 [2024-11-15 11:31:22.568915] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:42.379 [2024-11-15 11:31:22.568935] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:12:42.379 [2024-11-15 11:31:22.568944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:42.379 [2024-11-15 11:31:22.568955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:42.379 [2024-11-15 11:31:22.569065] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:42.379 [2024-11-15 11:31:22.569073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:42.379 [2024-11-15 11:31:22.569081] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:42.379 [2024-11-15 11:31:22.569923] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:42.379 [2024-11-15 11:31:22.570925] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:42.379 [2024-11-15 11:31:22.571936] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:42.379 [2024-11-15 11:31:22.572936] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:42.379 [2024-11-15 11:31:22.573007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:42.379 [2024-11-15 11:31:22.573959] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:42.379 [2024-11-15 11:31:22.573982] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:42.379 [2024-11-15 11:31:22.573993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.574016] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:42.379 [2024-11-15 11:31:22.574033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.574054] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.379 [2024-11-15 11:31:22.574064] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.379 [2024-11-15 11:31:22.574070] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.379 [2024-11-15 11:31:22.574087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.580318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:42.379 
[2024-11-15 11:31:22.580341] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:42.379 [2024-11-15 11:31:22.580350] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:42.379 [2024-11-15 11:31:22.580357] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:42.379 [2024-11-15 11:31:22.580365] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:42.379 [2024-11-15 11:31:22.580384] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:42.379 [2024-11-15 11:31:22.580393] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:42.379 [2024-11-15 11:31:22.580401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.580417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.580433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.588316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.588340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.379 [2024-11-15 11:31:22.588354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.379 [2024-11-15 11:31:22.588366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.379 [2024-11-15 11:31:22.588378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.379 [2024-11-15 11:31:22.588387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.588399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.588417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.596315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.596338] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:42.379 [2024-11-15 11:31:22.596348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:12:42.379 [2024-11-15 11:31:22.596360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.596369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.596383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.604315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.604390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.604407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.604421] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:42.379 [2024-11-15 11:31:22.604429] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:42.379 [2024-11-15 11:31:22.604435] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.379 [2024-11-15 11:31:22.604445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.612312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.612336] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:42.379 [2024-11-15 11:31:22.612355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.612371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.612384] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.379 [2024-11-15 11:31:22.612392] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.379 [2024-11-15 11:31:22.612398] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.379 [2024-11-15 11:31:22.612408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.620317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.620346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.620362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.620386] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.379 [2024-11-15 11:31:22.620399] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.379 [2024-11-15 11:31:22.620406] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.379 [2024-11-15 11:31:22.620416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.628316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.628339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628400] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:42.379 [2024-11-15 11:31:22.628408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:42.379 [2024-11-15 11:31:22.628416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:42.379 [2024-11-15 11:31:22.628440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.636315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.636342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.644311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.644338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.652314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:12:42.379 [2024-11-15 11:31:22.652340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.379 [2024-11-15 11:31:22.660312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:42.379 [2024-11-15 11:31:22.660345] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:42.379 [2024-11-15 11:31:22.660357] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:42.379 [2024-11-15 11:31:22.660363] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:42.379 [2024-11-15 11:31:22.660368] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:42.379 [2024-11-15 11:31:22.660374] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:42.380 [2024-11-15 11:31:22.660384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:42.380 [2024-11-15 11:31:22.660401] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:42.380 [2024-11-15 11:31:22.660410] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:42.380 [2024-11-15 11:31:22.660416] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.380 [2024-11-15 11:31:22.660425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:42.380 [2024-11-15 11:31:22.660436] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:42.380 [2024-11-15 11:31:22.660444] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.380 [2024-11-15 11:31:22.660450] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.380 [2024-11-15 11:31:22.660459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.380 [2024-11-15 11:31:22.660471] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:42.380 [2024-11-15 11:31:22.660479] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:42.380 [2024-11-15 11:31:22.660485] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.380 [2024-11-15 11:31:22.660494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:42.380 [2024-11-15 11:31:22.668328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:42.380 [2024-11-15 11:31:22.668357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:42.380 [2024-11-15 11:31:22.668375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:42.380 
[2024-11-15 11:31:22.668388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:42.380 ===================================================== 00:12:42.380 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:42.380 ===================================================== 00:12:42.380 Controller Capabilities/Features 00:12:42.380 ================================ 00:12:42.380 Vendor ID: 4e58 00:12:42.380 Subsystem Vendor ID: 4e58 00:12:42.380 Serial Number: SPDK2 00:12:42.380 Model Number: SPDK bdev Controller 00:12:42.380 Firmware Version: 25.01 00:12:42.380 Recommended Arb Burst: 6 00:12:42.380 IEEE OUI Identifier: 8d 6b 50 00:12:42.380 Multi-path I/O 00:12:42.380 May have multiple subsystem ports: Yes 00:12:42.380 May have multiple controllers: Yes 00:12:42.380 Associated with SR-IOV VF: No 00:12:42.380 Max Data Transfer Size: 131072 00:12:42.380 Max Number of Namespaces: 32 00:12:42.380 Max Number of I/O Queues: 127 00:12:42.380 NVMe Specification Version (VS): 1.3 00:12:42.380 NVMe Specification Version (Identify): 1.3 00:12:42.380 Maximum Queue Entries: 256 00:12:42.380 Contiguous Queues Required: Yes 00:12:42.380 Arbitration Mechanisms Supported 00:12:42.380 Weighted Round Robin: Not Supported 00:12:42.380 Vendor Specific: Not Supported 00:12:42.380 Reset Timeout: 15000 ms 00:12:42.380 Doorbell Stride: 4 bytes 00:12:42.380 NVM Subsystem Reset: Not Supported 00:12:42.380 Command Sets Supported 00:12:42.380 NVM Command Set: Supported 00:12:42.380 Boot Partition: Not Supported 00:12:42.380 Memory Page Size Minimum: 4096 bytes 00:12:42.380 Memory Page Size Maximum: 4096 bytes 00:12:42.380 Persistent Memory Region: Not Supported 00:12:42.380 Optional Asynchronous Events Supported 00:12:42.380 Namespace Attribute Notices: Supported 00:12:42.380 Firmware Activation Notices: Not Supported 00:12:42.380 ANA Change Notices: Not Supported 00:12:42.380 PLE Aggregate Log Change Notices: Not Supported 00:12:42.380 LBA Status Info Alert Notices: Not Supported 00:12:42.380 EGE Aggregate Log Change Notices: Not Supported 00:12:42.380 Normal NVM Subsystem Shutdown event: Not Supported 00:12:42.380 Zone Descriptor Change Notices: Not Supported 00:12:42.380 Discovery Log Change Notices: Not Supported 00:12:42.380 Controller Attributes 00:12:42.380 128-bit Host Identifier: Supported 00:12:42.380 Non-Operational Permissive Mode: Not Supported 00:12:42.380 NVM Sets: Not Supported 00:12:42.380 Read Recovery Levels: Not Supported 00:12:42.380 Endurance Groups: Not Supported 00:12:42.380 Predictable Latency Mode: Not Supported 00:12:42.380 Traffic Based Keep ALive: Not Supported 00:12:42.380 Namespace Granularity: Not Supported 00:12:42.380 SQ Associations: Not Supported 00:12:42.380 UUID List: Not Supported 00:12:42.380 Multi-Domain Subsystem: Not Supported 00:12:42.380 Fixed Capacity Management: Not Supported 00:12:42.380 Variable Capacity Management: Not Supported 00:12:42.380 Delete Endurance Group: Not Supported 00:12:42.380 Delete NVM Set: Not Supported 00:12:42.380 Extended LBA Formats Supported: Not Supported 00:12:42.380 Flexible Data Placement Supported: Not Supported 00:12:42.380 00:12:42.380 Controller Memory Buffer Support 00:12:42.380 ================================ 00:12:42.380 Supported: No 00:12:42.380 00:12:42.380 Persistent Memory Region Support 00:12:42.380 ================================ 00:12:42.380 Supported: No 00:12:42.380 00:12:42.380 Admin Command Set Attributes 
00:12:42.380 ============================ 00:12:42.380 Security Send/Receive: Not Supported 00:12:42.380 Format NVM: Not Supported 00:12:42.380 Firmware Activate/Download: Not Supported 00:12:42.380 Namespace Management: Not Supported 00:12:42.380 Device Self-Test: Not Supported 00:12:42.380 Directives: Not Supported 00:12:42.380 NVMe-MI: Not Supported 00:12:42.380 Virtualization Management: Not Supported 00:12:42.380 Doorbell Buffer Config: Not Supported 00:12:42.380 Get LBA Status Capability: Not Supported 00:12:42.380 Command & Feature Lockdown Capability: Not Supported 00:12:42.380 Abort Command Limit: 4 00:12:42.380 Async Event Request Limit: 4 00:12:42.380 Number of Firmware Slots: N/A 00:12:42.380 Firmware Slot 1 Read-Only: N/A 00:12:42.380 Firmware Activation Without Reset: N/A 00:12:42.380 Multiple Update Detection Support: N/A 00:12:42.380 Firmware Update Granularity: No Information Provided 00:12:42.380 Per-Namespace SMART Log: No 00:12:42.380 Asymmetric Namespace Access Log Page: Not Supported 00:12:42.380 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:42.380 Command Effects Log Page: Supported 00:12:42.380 Get Log Page Extended Data: Supported 00:12:42.380 Telemetry Log Pages: Not Supported 00:12:42.380 Persistent Event Log Pages: Not Supported 00:12:42.380 Supported Log Pages Log Page: May Support 00:12:42.380 Commands Supported & Effects Log Page: Not Supported 00:12:42.380 Feature Identifiers & Effects Log Page:May Support 00:12:42.380 NVMe-MI Commands & Effects Log Page: May Support 00:12:42.380 Data Area 4 for Telemetry Log: Not Supported 00:12:42.380 Error Log Page Entries Supported: 128 00:12:42.380 Keep Alive: Supported 00:12:42.380 Keep Alive Granularity: 10000 ms 00:12:42.380 00:12:42.380 NVM Command Set Attributes 00:12:42.380 ========================== 00:12:42.380 Submission Queue Entry Size 00:12:42.380 Max: 64 00:12:42.380 Min: 64 00:12:42.380 Completion Queue Entry Size 00:12:42.380 Max: 16 00:12:42.380 Min: 16 00:12:42.380 Number of Namespaces: 32 00:12:42.380 Compare Command: Supported 00:12:42.380 Write Uncorrectable Command: Not Supported 00:12:42.380 Dataset Management Command: Supported 00:12:42.380 Write Zeroes Command: Supported 00:12:42.380 Set Features Save Field: Not Supported 00:12:42.380 Reservations: Not Supported 00:12:42.380 Timestamp: Not Supported 00:12:42.380 Copy: Supported 00:12:42.380 Volatile Write Cache: Present 00:12:42.380 Atomic Write Unit (Normal): 1 00:12:42.380 Atomic Write Unit (PFail): 1 00:12:42.380 Atomic Compare & Write Unit: 1 00:12:42.380 Fused Compare & Write: Supported 00:12:42.380 Scatter-Gather List 00:12:42.380 SGL Command Set: Supported (Dword aligned) 00:12:42.380 SGL Keyed: Not Supported 00:12:42.380 SGL Bit Bucket Descriptor: Not Supported 00:12:42.380 SGL Metadata Pointer: Not Supported 00:12:42.380 Oversized SGL: Not Supported 00:12:42.380 SGL Metadata Address: Not Supported 00:12:42.380 SGL Offset: Not Supported 00:12:42.381 Transport SGL Data Block: Not Supported 00:12:42.381 Replay Protected Memory Block: Not Supported 00:12:42.381 00:12:42.381 Firmware Slot Information 00:12:42.381 ========================= 00:12:42.381 Active slot: 1 00:12:42.381 Slot 1 Firmware Revision: 25.01 00:12:42.381 00:12:42.381 00:12:42.381 Commands Supported and Effects 00:12:42.381 ============================== 00:12:42.381 Admin Commands 00:12:42.381 -------------- 00:12:42.381 Get Log Page (02h): Supported 00:12:42.381 Identify (06h): Supported 00:12:42.381 Abort (08h): Supported 00:12:42.381 Set Features (09h): Supported 
00:12:42.381 Get Features (0Ah): Supported 00:12:42.381 Asynchronous Event Request (0Ch): Supported 00:12:42.381 Keep Alive (18h): Supported 00:12:42.381 I/O Commands 00:12:42.381 ------------ 00:12:42.381 Flush (00h): Supported LBA-Change 00:12:42.381 Write (01h): Supported LBA-Change 00:12:42.381 Read (02h): Supported 00:12:42.381 Compare (05h): Supported 00:12:42.381 Write Zeroes (08h): Supported LBA-Change 00:12:42.381 Dataset Management (09h): Supported LBA-Change 00:12:42.381 Copy (19h): Supported LBA-Change 00:12:42.381 00:12:42.381 Error Log 00:12:42.381 ========= 00:12:42.381 00:12:42.381 Arbitration 00:12:42.381 =========== 00:12:42.381 Arbitration Burst: 1 00:12:42.381 00:12:42.381 Power Management 00:12:42.381 ================ 00:12:42.381 Number of Power States: 1 00:12:42.381 Current Power State: Power State #0 00:12:42.381 Power State #0: 00:12:42.381 Max Power: 0.00 W 00:12:42.381 Non-Operational State: Operational 00:12:42.381 Entry Latency: Not Reported 00:12:42.381 Exit Latency: Not Reported 00:12:42.381 Relative Read Throughput: 0 00:12:42.381 Relative Read Latency: 0 00:12:42.381 Relative Write Throughput: 0 00:12:42.381 Relative Write Latency: 0 00:12:42.381 Idle Power: Not Reported 00:12:42.381 Active Power: Not Reported 00:12:42.381 Non-Operational Permissive Mode: Not Supported 00:12:42.381 00:12:42.381 Health Information 00:12:42.381 ================== 00:12:42.381 Critical Warnings: 00:12:42.381 Available Spare Space: OK 00:12:42.381 Temperature: OK 00:12:42.381 Device Reliability: OK 00:12:42.381 Read Only: No 00:12:42.381 Volatile Memory Backup: OK 00:12:42.381 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:42.381 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:42.381 Available Spare: 0% 00:12:42.381 Available Sp[2024-11-15 11:31:22.668508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:42.381 [2024-11-15 11:31:22.676328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:42.381 [2024-11-15 11:31:22.676390] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:42.381 [2024-11-15 11:31:22.676408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.381 [2024-11-15 11:31:22.676419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.381 [2024-11-15 11:31:22.676429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.381 [2024-11-15 11:31:22.676439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.381 [2024-11-15 11:31:22.676526] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:42.381 [2024-11-15 11:31:22.676547] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:42.381 [2024-11-15 11:31:22.677533] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:42.381 [2024-11-15 11:31:22.677605] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:42.381 [2024-11-15 11:31:22.677638] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:42.381 [2024-11-15 11:31:22.678542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:42.381 [2024-11-15 11:31:22.678567] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:42.381 [2024-11-15 11:31:22.678637] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:42.381 [2024-11-15 11:31:22.679816] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.381 are Threshold: 0% 00:12:42.381 Life Percentage Used: 0% 00:12:42.381 Data Units Read: 0 00:12:42.381 Data Units Written: 0 00:12:42.381 Host Read Commands: 0 00:12:42.381 Host Write Commands: 0 00:12:42.381 Controller Busy Time: 0 minutes 00:12:42.381 Power Cycles: 0 00:12:42.381 Power On Hours: 0 hours 00:12:42.381 Unsafe Shutdowns: 0 00:12:42.381 Unrecoverable Media Errors: 0 00:12:42.381 Lifetime Error Log Entries: 0 00:12:42.381 Warning Temperature Time: 0 minutes 00:12:42.381 Critical Temperature Time: 0 minutes 00:12:42.381 00:12:42.381 Number of Queues 00:12:42.381 ================ 00:12:42.381 Number of I/O Submission Queues: 127 00:12:42.381 Number of I/O Completion Queues: 127 00:12:42.381 00:12:42.381 Active Namespaces 00:12:42.381 ================= 00:12:42.381 Namespace ID:1 00:12:42.381 Error Recovery Timeout: Unlimited 00:12:42.381 Command Set Identifier: NVM (00h) 00:12:42.381 Deallocate: Supported 00:12:42.381 Deallocated/Unwritten Error: Not Supported 00:12:42.381 Deallocated Read Value: Unknown 00:12:42.381 Deallocate in Write Zeroes: Not Supported 00:12:42.381 Deallocated Guard Field: 0xFFFF 00:12:42.381 Flush: Supported 00:12:42.381 Reservation: Supported 00:12:42.381 Namespace Sharing Capabilities: Multiple Controllers 00:12:42.381 Size (in LBAs): 131072 (0GiB) 00:12:42.381 Capacity (in LBAs): 131072 (0GiB) 00:12:42.381 Utilization (in LBAs): 131072 (0GiB) 00:12:42.381 NGUID: 70E18F30551E4A2C907E00E709E9488B 00:12:42.381 UUID: 70e18f30-551e-4a2c-907e-00e709e9488b 00:12:42.381 Thin Provisioning: Not Supported 00:12:42.381 Per-NS Atomic Units: Yes 00:12:42.381 Atomic Boundary Size (Normal): 0 00:12:42.381 Atomic Boundary Size (PFail): 0 00:12:42.381 Atomic Boundary Offset: 0 00:12:42.381 Maximum Single Source Range Length: 65535 00:12:42.381 Maximum Copy Length: 65535 00:12:42.381 Maximum Source Range Count: 1 00:12:42.381 NGUID/EUI64 Never Reused: No 00:12:42.381 Namespace Write Protected: No 00:12:42.381 Number of LBA Formats: 1 00:12:42.381 Current LBA Format: LBA Format #00 00:12:42.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:42.381 00:12:42.381 11:31:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:42.638 [2024-11-15 11:31:22.934761] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.902 Initializing NVMe Controllers 00:12:47.902 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:47.902 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:47.902 Initialization complete. Launching workers. 00:12:47.902 ======================================================== 00:12:47.902 Latency(us) 00:12:47.902 Device Information : IOPS MiB/s Average min max 00:12:47.902 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34432.74 134.50 3716.57 1175.84 8239.34 00:12:47.902 ======================================================== 00:12:47.902 Total : 34432.74 134.50 3716.57 1175.84 8239.34 00:12:47.902 00:12:47.902 [2024-11-15 11:31:28.036641] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.902 11:31:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:47.902 [2024-11-15 11:31:28.304431] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:53.161 Initializing NVMe Controllers 00:12:53.161 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:53.161 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:53.161 Initialization complete. Launching workers. 00:12:53.161 ======================================================== 00:12:53.161 Latency(us) 00:12:53.161 Device Information : IOPS MiB/s Average min max 00:12:53.161 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30925.58 120.80 4139.92 1203.05 10111.65 00:12:53.161 ======================================================== 00:12:53.161 Total : 30925.58 120.80 4139.92 1203.05 10111.65 00:12:53.161 00:12:53.161 [2024-11-15 11:31:33.328696] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:53.161 11:31:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:53.161 [2024-11-15 11:31:33.548085] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.418 [2024-11-15 11:31:38.681447] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.418 Initializing NVMe Controllers 00:12:58.418 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:58.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:58.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:58.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:58.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:58.418 Initialization complete. Launching workers. 
00:12:58.418 Starting thread on core 2 00:12:58.418 Starting thread on core 3 00:12:58.418 Starting thread on core 1 00:12:58.419 11:31:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:58.677 [2024-11-15 11:31:39.013780] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:01.955 [2024-11-15 11:31:42.083762] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:01.955 Initializing NVMe Controllers 00:13:01.955 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:01.955 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:01.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:01.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:01.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:01.955 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:01.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:01.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:01.955 Initialization complete. Launching workers. 00:13:01.955 Starting thread on core 1 with urgent priority queue 00:13:01.955 Starting thread on core 2 with urgent priority queue 00:13:01.955 Starting thread on core 3 with urgent priority queue 00:13:01.955 Starting thread on core 0 with urgent priority queue 00:13:01.955 SPDK bdev Controller (SPDK2 ) core 0: 5716.33 IO/s 17.49 secs/100000 ios 00:13:01.955 SPDK bdev Controller (SPDK2 ) core 1: 4933.00 IO/s 20.27 secs/100000 ios 00:13:01.955 SPDK bdev Controller (SPDK2 ) core 2: 5055.67 IO/s 19.78 secs/100000 ios 00:13:01.955 SPDK bdev Controller (SPDK2 ) core 3: 5642.67 IO/s 17.72 secs/100000 ios 00:13:01.955 ======================================================== 00:13:01.955 00:13:01.955 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:02.212 [2024-11-15 11:31:42.406818] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.212 Initializing NVMe Controllers 00:13:02.212 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.212 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:02.212 Namespace ID: 1 size: 0GB 00:13:02.212 Initialization complete. 00:13:02.212 INFO: using host memory buffer for IO 00:13:02.212 Hello world! 
00:13:02.212 [2024-11-15 11:31:42.416015] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.212 11:31:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:02.469 [2024-11-15 11:31:42.730090] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.401 Initializing NVMe Controllers 00:13:03.401 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.401 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:03.401 Initialization complete. Launching workers. 00:13:03.401 submit (in ns) avg, min, max = 8599.1, 3606.7, 4016976.7 00:13:03.401 complete (in ns) avg, min, max = 27343.3, 2073.3, 4029245.6 00:13:03.401 00:13:03.401 Submit histogram 00:13:03.401 ================ 00:13:03.401 Range in us Cumulative Count 00:13:03.401 3.603 - 3.627: 1.9229% ( 251) 00:13:03.401 3.627 - 3.650: 10.4803% ( 1117) 00:13:03.401 3.650 - 3.674: 22.2248% ( 1533) 00:13:03.401 3.674 - 3.698: 33.4023% ( 1459) 00:13:03.401 3.698 - 3.721: 41.4387% ( 1049) 00:13:03.401 3.721 - 3.745: 46.8705% ( 709) 00:13:03.401 3.745 - 3.769: 52.5320% ( 739) 00:13:03.401 3.769 - 3.793: 58.0939% ( 726) 00:13:03.401 3.793 - 3.816: 63.1502% ( 660) 00:13:03.401 3.816 - 3.840: 66.6130% ( 452) 00:13:03.401 3.840 - 3.864: 69.5626% ( 385) 00:13:03.401 3.864 - 3.887: 73.3012% ( 488) 00:13:03.401 3.887 - 3.911: 77.5760% ( 558) 00:13:03.401 3.911 - 3.935: 82.2186% ( 606) 00:13:03.401 3.935 - 3.959: 85.2448% ( 395) 00:13:03.401 3.959 - 3.982: 87.1371% ( 247) 00:13:03.401 3.982 - 4.006: 88.8225% ( 220) 00:13:03.401 4.006 - 4.030: 90.3087% ( 194) 00:13:03.401 4.030 - 4.053: 91.9329% ( 212) 00:13:03.402 4.053 - 4.077: 93.0821% ( 150) 00:13:03.402 4.077 - 4.101: 93.9324% ( 111) 00:13:03.402 4.101 - 4.124: 94.6296% ( 91) 00:13:03.402 4.124 - 4.148: 95.2655% ( 83) 00:13:03.402 4.148 - 4.172: 95.7404% ( 62) 00:13:03.402 4.172 - 4.196: 96.0086% ( 35) 00:13:03.402 4.196 - 4.219: 96.2308% ( 29) 00:13:03.402 4.219 - 4.243: 96.4070% ( 23) 00:13:03.402 4.243 - 4.267: 96.4836% ( 10) 00:13:03.402 4.267 - 4.290: 96.5602% ( 10) 00:13:03.402 4.290 - 4.314: 96.6981% ( 18) 00:13:03.402 4.314 - 4.338: 96.8053% ( 14) 00:13:03.402 4.338 - 4.361: 96.8896% ( 11) 00:13:03.402 4.361 - 4.385: 96.9892% ( 13) 00:13:03.402 4.385 - 4.409: 97.0198% ( 4) 00:13:03.402 4.409 - 4.433: 97.0428% ( 3) 00:13:03.402 4.433 - 4.456: 97.0581% ( 2) 00:13:03.402 4.456 - 4.480: 97.0658% ( 1) 00:13:03.402 4.480 - 4.504: 97.0811% ( 2) 00:13:03.402 4.551 - 4.575: 97.0888% ( 1) 00:13:03.402 4.693 - 4.717: 97.0965% ( 1) 00:13:03.402 4.717 - 4.741: 97.1041% ( 1) 00:13:03.402 4.741 - 4.764: 97.1194% ( 2) 00:13:03.402 4.764 - 4.788: 97.1424% ( 3) 00:13:03.402 4.788 - 4.812: 97.1654% ( 3) 00:13:03.402 4.812 - 4.836: 97.1960% ( 4) 00:13:03.402 4.836 - 4.859: 97.2573% ( 8) 00:13:03.402 4.859 - 4.883: 97.2880% ( 4) 00:13:03.402 4.883 - 4.907: 97.3339% ( 6) 00:13:03.402 4.907 - 4.930: 97.3799% ( 6) 00:13:03.402 4.930 - 4.954: 97.4259% ( 6) 00:13:03.402 4.954 - 4.978: 97.4642% ( 5) 00:13:03.402 4.978 - 5.001: 97.5102% ( 6) 00:13:03.402 5.001 - 5.025: 97.5408% ( 4) 00:13:03.402 5.025 - 5.049: 97.5791% ( 5) 00:13:03.402 5.049 - 5.073: 97.6021% ( 3) 00:13:03.402 5.073 - 5.096: 97.6481% ( 6) 00:13:03.402 5.096 - 5.120: 97.7093% ( 8) 00:13:03.402 5.120 - 
5.144: 97.7400% ( 4) 00:13:03.402 5.144 - 5.167: 97.7553% ( 2) 00:13:03.402 5.167 - 5.191: 97.7706% ( 2) 00:13:03.402 5.191 - 5.215: 97.7859% ( 2) 00:13:03.402 5.215 - 5.239: 97.8013% ( 2) 00:13:03.402 5.239 - 5.262: 97.8166% ( 2) 00:13:03.402 5.262 - 5.286: 97.8396% ( 3) 00:13:03.402 5.286 - 5.310: 97.8626% ( 3) 00:13:03.402 5.333 - 5.357: 97.8855% ( 3) 00:13:03.402 5.357 - 5.381: 97.8932% ( 1) 00:13:03.402 5.404 - 5.428: 97.9009% ( 1) 00:13:03.402 5.452 - 5.476: 97.9085% ( 1) 00:13:03.402 5.476 - 5.499: 97.9162% ( 1) 00:13:03.402 5.499 - 5.523: 97.9238% ( 1) 00:13:03.402 5.570 - 5.594: 97.9315% ( 1) 00:13:03.402 5.618 - 5.641: 97.9392% ( 1) 00:13:03.402 5.641 - 5.665: 97.9468% ( 1) 00:13:03.402 5.760 - 5.784: 97.9545% ( 1) 00:13:03.402 5.902 - 5.926: 97.9622% ( 1) 00:13:03.402 5.973 - 5.997: 97.9698% ( 1) 00:13:03.402 6.021 - 6.044: 97.9775% ( 1) 00:13:03.402 6.044 - 6.068: 97.9851% ( 1) 00:13:03.402 6.163 - 6.210: 97.9928% ( 1) 00:13:03.402 6.210 - 6.258: 98.0005% ( 1) 00:13:03.402 6.258 - 6.305: 98.0081% ( 1) 00:13:03.402 6.447 - 6.495: 98.0158% ( 1) 00:13:03.402 6.684 - 6.732: 98.0234% ( 1) 00:13:03.402 6.779 - 6.827: 98.0388% ( 2) 00:13:03.402 6.827 - 6.874: 98.0464% ( 1) 00:13:03.402 6.874 - 6.921: 98.0541% ( 1) 00:13:03.402 6.969 - 7.016: 98.0694% ( 2) 00:13:03.402 7.016 - 7.064: 98.0847% ( 2) 00:13:03.402 7.064 - 7.111: 98.1001% ( 2) 00:13:03.402 7.111 - 7.159: 98.1154% ( 2) 00:13:03.402 7.253 - 7.301: 98.1230% ( 1) 00:13:03.402 7.301 - 7.348: 98.1307% ( 1) 00:13:03.402 7.443 - 7.490: 98.1384% ( 1) 00:13:03.402 7.490 - 7.538: 98.1537% ( 2) 00:13:03.402 7.585 - 7.633: 98.1613% ( 1) 00:13:03.402 7.680 - 7.727: 98.1843% ( 3) 00:13:03.402 7.775 - 7.822: 98.1920% ( 1) 00:13:03.402 7.870 - 7.917: 98.1996% ( 1) 00:13:03.402 8.012 - 8.059: 98.2073% ( 1) 00:13:03.402 8.107 - 8.154: 98.2150% ( 1) 00:13:03.402 8.201 - 8.249: 98.2226% ( 1) 00:13:03.402 8.249 - 8.296: 98.2380% ( 2) 00:13:03.402 8.296 - 8.344: 98.2456% ( 1) 00:13:03.402 8.391 - 8.439: 98.2533% ( 1) 00:13:03.402 8.439 - 8.486: 98.2609% ( 1) 00:13:03.402 8.486 - 8.533: 98.2686% ( 1) 00:13:03.402 8.533 - 8.581: 98.2763% ( 1) 00:13:03.402 8.676 - 8.723: 98.2916% ( 2) 00:13:03.402 8.818 - 8.865: 98.3069% ( 2) 00:13:03.402 8.865 - 8.913: 98.3299% ( 3) 00:13:03.402 8.913 - 8.960: 98.3375% ( 1) 00:13:03.402 8.960 - 9.007: 98.3529% ( 2) 00:13:03.402 9.007 - 9.055: 98.3682% ( 2) 00:13:03.402 9.055 - 9.102: 98.3759% ( 1) 00:13:03.402 9.197 - 9.244: 98.3912% ( 2) 00:13:03.402 9.244 - 9.292: 98.4142% ( 3) 00:13:03.402 9.292 - 9.339: 98.4295% ( 2) 00:13:03.402 9.339 - 9.387: 98.4371% ( 1) 00:13:03.402 9.434 - 9.481: 98.4448% ( 1) 00:13:03.402 9.481 - 9.529: 98.4601% ( 2) 00:13:03.402 9.529 - 9.576: 98.4678% ( 1) 00:13:03.402 9.576 - 9.624: 98.4908% ( 3) 00:13:03.402 9.671 - 9.719: 98.4984% ( 1) 00:13:03.402 9.908 - 9.956: 98.5061% ( 1) 00:13:03.402 9.956 - 10.003: 98.5138% ( 1) 00:13:03.402 10.098 - 10.145: 98.5367% ( 3) 00:13:03.402 10.287 - 10.335: 98.5521% ( 2) 00:13:03.402 10.382 - 10.430: 98.5597% ( 1) 00:13:03.402 10.430 - 10.477: 98.5827% ( 3) 00:13:03.402 10.619 - 10.667: 98.5904% ( 1) 00:13:03.402 10.714 - 10.761: 98.5980% ( 1) 00:13:03.402 10.856 - 10.904: 98.6057% ( 1) 00:13:03.402 10.951 - 10.999: 98.6133% ( 1) 00:13:03.402 11.093 - 11.141: 98.6363% ( 3) 00:13:03.402 11.188 - 11.236: 98.6440% ( 1) 00:13:03.402 11.236 - 11.283: 98.6517% ( 1) 00:13:03.402 11.330 - 11.378: 98.6670% ( 2) 00:13:03.402 11.378 - 11.425: 98.6746% ( 1) 00:13:03.402 11.520 - 11.567: 98.6823% ( 1) 00:13:03.402 11.567 - 11.615: 98.6976% ( 2) 00:13:03.402 
11.757 - 11.804: 98.7053% ( 1) 00:13:03.402 11.947 - 11.994: 98.7129% ( 1) 00:13:03.402 12.041 - 12.089: 98.7206% ( 1) 00:13:03.402 12.136 - 12.231: 98.7283% ( 1) 00:13:03.402 12.326 - 12.421: 98.7359% ( 1) 00:13:03.402 12.610 - 12.705: 98.7436% ( 1) 00:13:03.402 12.800 - 12.895: 98.7512% ( 1) 00:13:03.402 12.895 - 12.990: 98.7819% ( 4) 00:13:03.402 12.990 - 13.084: 98.7896% ( 1) 00:13:03.402 13.084 - 13.179: 98.8049% ( 2) 00:13:03.402 13.274 - 13.369: 98.8202% ( 2) 00:13:03.402 13.369 - 13.464: 98.8355% ( 2) 00:13:03.402 13.559 - 13.653: 98.8508% ( 2) 00:13:03.402 13.653 - 13.748: 98.8662% ( 2) 00:13:03.402 13.748 - 13.843: 98.8738% ( 1) 00:13:03.402 13.843 - 13.938: 98.8815% ( 1) 00:13:03.402 14.127 - 14.222: 98.8891% ( 1) 00:13:03.402 14.317 - 14.412: 98.8968% ( 1) 00:13:03.402 14.507 - 14.601: 98.9045% ( 1) 00:13:03.402 14.791 - 14.886: 98.9198% ( 2) 00:13:03.402 14.981 - 15.076: 98.9274% ( 1) 00:13:03.402 15.170 - 15.265: 98.9351% ( 1) 00:13:03.402 15.265 - 15.360: 98.9428% ( 1) 00:13:03.402 15.929 - 16.024: 98.9504% ( 1) 00:13:03.402 17.067 - 17.161: 98.9734% ( 3) 00:13:03.402 17.161 - 17.256: 98.9887% ( 2) 00:13:03.402 17.256 - 17.351: 98.9964% ( 1) 00:13:03.402 17.351 - 17.446: 99.0577% ( 8) 00:13:03.402 17.446 - 17.541: 99.0653% ( 1) 00:13:03.402 17.541 - 17.636: 99.1266% ( 8) 00:13:03.402 17.636 - 17.730: 99.1803% ( 7) 00:13:03.402 17.730 - 17.825: 99.2186% ( 5) 00:13:03.402 17.825 - 17.920: 99.2799% ( 8) 00:13:03.402 17.920 - 18.015: 99.3411% ( 8) 00:13:03.402 18.015 - 18.110: 99.3641% ( 3) 00:13:03.402 18.110 - 18.204: 99.4331% ( 9) 00:13:03.402 18.204 - 18.299: 99.4637% ( 4) 00:13:03.402 18.299 - 18.394: 99.5174% ( 7) 00:13:03.402 18.394 - 18.489: 99.5557% ( 5) 00:13:03.402 18.489 - 18.584: 99.6399% ( 11) 00:13:03.402 18.584 - 18.679: 99.7012% ( 8) 00:13:03.402 18.679 - 18.773: 99.7395% ( 5) 00:13:03.402 18.773 - 18.868: 99.7472% ( 1) 00:13:03.402 18.868 - 18.963: 99.7932% ( 6) 00:13:03.402 18.963 - 19.058: 99.8008% ( 1) 00:13:03.402 19.058 - 19.153: 99.8161% ( 2) 00:13:03.402 19.153 - 19.247: 99.8391% ( 3) 00:13:03.402 19.247 - 19.342: 99.8468% ( 1) 00:13:03.402 19.342 - 19.437: 99.8544% ( 1) 00:13:03.402 19.437 - 19.532: 99.8621% ( 1) 00:13:03.402 19.627 - 19.721: 99.8698% ( 1) 00:13:03.402 19.816 - 19.911: 99.8774% ( 1) 00:13:03.402 23.609 - 23.704: 99.8851% ( 1) 00:13:03.402 3980.705 - 4004.978: 99.9540% ( 9) 00:13:03.402 4004.978 - 4029.250: 100.0000% ( 6) 00:13:03.402 00:13:03.402 Complete histogram 00:13:03.402 ================== 00:13:03.402 Range in us Cumulative Count 00:13:03.402 2.062 - 2.074: 0.0153% ( 2) 00:13:03.402 2.074 - 2.086: 2.0302% ( 263) 00:13:03.402 2.086 - 2.098: 29.2959% ( 3559) 00:13:03.402 2.098 - 2.110: 45.4761% ( 2112) 00:13:03.403 2.110 - 2.121: 49.6055% ( 539) 00:13:03.403 2.121 - 2.133: 58.0480% ( 1102) 00:13:03.403 2.133 - 2.145: 60.8213% ( 362) 00:13:03.403 2.145 - 2.157: 64.0236% ( 418) 00:13:03.403 2.157 - 2.169: 75.8217% ( 1540) 00:13:03.403 2.169 - 2.181: 79.1542% ( 435) 00:13:03.403 2.181 - 2.193: 81.0388% ( 246) 00:13:03.403 2.193 - 2.204: 83.6359% ( 339) 00:13:03.403 2.204 - 2.216: 84.5553% ( 120) 00:13:03.403 2.216 - 2.228: 86.6084% ( 268) 00:13:03.403 2.228 - 2.240: 91.0289% ( 577) 00:13:03.403 2.240 - 2.252: 92.4232% ( 182) 00:13:03.403 2.252 - 2.264: 92.9365% ( 67) 00:13:03.403 2.264 - 2.276: 93.7869% ( 111) 00:13:03.403 2.276 - 2.287: 94.2312% ( 58) 00:13:03.403 2.287 - 2.299: 94.8747% ( 84) 00:13:03.403 2.299 - 2.311: 95.5796% ( 92) 00:13:03.403 2.311 - 2.323: 95.7941% ( 28) 00:13:03.403 2.323 - 2.335: 95.8707% ( 10) 00:13:03.403 
2.335 - 2.347: 95.9396% ( 9) 00:13:03.403 2.347 - 2.359: 96.0392% ( 13) 00:13:03.403 2.359 - 2.370: 96.2308% ( 25) 00:13:03.403 2.370 - 2.382: 96.4912% ( 34) 00:13:03.403 2.382 - 2.394: 96.8207% ( 43) 00:13:03.403 2.394 - 2.406: 97.0965% ( 36) 00:13:03.403 2.406 - 2.418: 97.3110% ( 28) 00:13:03.403 2.418 - 2.430: 97.4642% ( 20) 00:13:03.403 2.430 - 2.441: 97.6251% ( 21) 00:13:03.403 2.441 - 2.453: 97.7476% ( 16) 00:13:03.403 2.453 - 2.465: 97.9162% ( 22) 00:13:03.403 2.465 - 2.477: 98.0158% ( 13) 00:13:03.403 2.477 - 2.489: 98.0847% ( 9) 00:13:03.403 2.489 - 2.501: 98.1460% ( 8) 00:13:03.403 2.501 - 2.513: 98.2226% ( 10) 00:13:03.403 2.513 - 2.524: 98.2916% ( 9) 00:13:03.403 2.524 - 2.536: 98.3146% ( 3) 00:13:03.403 2.536 - 2.548: 98.3299% ( 2) 00:13:03.403 2.548 - 2.560: 98.3452% ( 2) 00:13:03.403 2.560 - 2.572: 98.3759% ( 4) 00:13:03.403 2.572 - 2.584: 98.3835% ( 1) 00:13:03.403 2.596 - 2.607: 98.3912% ( 1) 00:13:03.403 2.643 - 2.655: 98.3988% ( 1) 00:13:03.403 2.714 - 2.726: 98.4065% ( 1) 00:13:03.403 2.738 - 2.750: 98.4142% ( 1) 00:13:03.403 2.797 - 2.809: 98.4218% ( 1) 00:13:03.403 2.939 - 2.951: 98.4295% ( 1) 00:13:03.403 3.556 - 3.579: 98.4371% ( 1) 00:13:03.403 3.603 - 3.627: 98.4448% ( 1) 00:13:03.403 3.627 - 3.650: 98.4601% ( 2) 00:13:03.403 3.650 - 3.674: 98.4678% ( 1) 00:13:03.403 3.674 - 3.698: 98.4754% ( 1) 00:13:03.661 3.698 - 3.721: 9[2024-11-15 11:31:43.831216] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.661 8.4831% ( 1) 00:13:03.661 3.721 - 3.745: 98.4908% ( 1) 00:13:03.661 3.793 - 3.816: 98.4984% ( 1) 00:13:03.661 3.864 - 3.887: 98.5138% ( 2) 00:13:03.661 3.887 - 3.911: 98.5214% ( 1) 00:13:03.661 3.911 - 3.935: 98.5367% ( 2) 00:13:03.661 3.935 - 3.959: 98.5444% ( 1) 00:13:03.661 3.982 - 4.006: 98.5597% ( 2) 00:13:03.661 4.030 - 4.053: 98.5674% ( 1) 00:13:03.661 4.101 - 4.124: 98.5827% ( 2) 00:13:03.661 4.124 - 4.148: 98.5904% ( 1) 00:13:03.661 4.267 - 4.290: 98.5980% ( 1) 00:13:03.661 4.314 - 4.338: 98.6057% ( 1) 00:13:03.661 6.068 - 6.116: 98.6133% ( 1) 00:13:03.661 6.495 - 6.542: 98.6210% ( 1) 00:13:03.661 6.637 - 6.684: 98.6287% ( 1) 00:13:03.661 6.827 - 6.874: 98.6363% ( 1) 00:13:03.661 6.969 - 7.016: 98.6517% ( 2) 00:13:03.661 7.253 - 7.301: 98.6593% ( 1) 00:13:03.661 7.396 - 7.443: 98.6746% ( 2) 00:13:03.661 7.443 - 7.490: 98.6823% ( 1) 00:13:03.661 7.538 - 7.585: 98.6900% ( 1) 00:13:03.661 7.680 - 7.727: 98.6976% ( 1) 00:13:03.661 7.964 - 8.012: 98.7053% ( 1) 00:13:03.661 8.059 - 8.107: 98.7129% ( 1) 00:13:03.661 9.387 - 9.434: 98.7206% ( 1) 00:13:03.661 9.624 - 9.671: 98.7283% ( 1) 00:13:03.661 10.809 - 10.856: 98.7359% ( 1) 00:13:03.661 11.662 - 11.710: 98.7436% ( 1) 00:13:03.661 14.412 - 14.507: 98.7512% ( 1) 00:13:03.661 15.550 - 15.644: 98.7589% ( 1) 00:13:03.661 15.644 - 15.739: 98.7819% ( 3) 00:13:03.661 15.739 - 15.834: 98.7896% ( 1) 00:13:03.661 15.834 - 15.929: 98.7972% ( 1) 00:13:03.661 15.929 - 16.024: 98.8279% ( 4) 00:13:03.661 16.024 - 16.119: 98.8662% ( 5) 00:13:03.661 16.119 - 16.213: 98.9198% ( 7) 00:13:03.661 16.213 - 16.308: 98.9734% ( 7) 00:13:03.661 16.308 - 16.403: 99.0117% ( 5) 00:13:03.661 16.403 - 16.498: 99.0577% ( 6) 00:13:03.661 16.498 - 16.593: 99.1037% ( 6) 00:13:03.661 16.593 - 16.687: 99.1496% ( 6) 00:13:03.661 16.782 - 16.877: 99.2109% ( 8) 00:13:03.661 16.877 - 16.972: 99.2569% ( 6) 00:13:03.661 16.972 - 17.067: 99.2645% ( 1) 00:13:03.661 17.067 - 17.161: 99.2722% ( 1) 00:13:03.661 17.256 - 17.351: 99.2875% ( 2) 00:13:03.661 17.351 - 17.446: 99.3028% ( 2) 
00:13:03.661 17.446 - 17.541: 99.3105% ( 1) 00:13:03.661 17.541 - 17.636: 99.3182% ( 1) 00:13:03.661 18.015 - 18.110: 99.3258% ( 1) 00:13:03.661 18.110 - 18.204: 99.3335% ( 1) 00:13:03.661 18.299 - 18.394: 99.3488% ( 2) 00:13:03.661 18.489 - 18.584: 99.3565% ( 1) 00:13:03.661 21.523 - 21.618: 99.3641% ( 1) 00:13:03.661 30.341 - 30.530: 99.3718% ( 1) 00:13:03.661 3373.890 - 3398.163: 99.3795% ( 1) 00:13:03.661 3980.705 - 4004.978: 99.8315% ( 59) 00:13:03.661 4004.978 - 4029.250: 100.0000% ( 22) 00:13:03.661 00:13:03.661 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:03.661 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:03.661 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:03.661 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:03.661 11:31:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:03.920 [ 00:13:03.920 { 00:13:03.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.920 "subtype": "Discovery", 00:13:03.920 "listen_addresses": [], 00:13:03.920 "allow_any_host": true, 00:13:03.920 "hosts": [] 00:13:03.920 }, 00:13:03.920 { 00:13:03.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:03.920 "subtype": "NVMe", 00:13:03.920 "listen_addresses": [ 00:13:03.920 { 00:13:03.920 "trtype": "VFIOUSER", 00:13:03.920 "adrfam": "IPv4", 00:13:03.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:03.920 "trsvcid": "0" 00:13:03.920 } 00:13:03.920 ], 00:13:03.920 "allow_any_host": true, 00:13:03.920 "hosts": [], 00:13:03.920 "serial_number": "SPDK1", 00:13:03.920 "model_number": "SPDK bdev Controller", 00:13:03.920 "max_namespaces": 32, 00:13:03.920 "min_cntlid": 1, 00:13:03.920 "max_cntlid": 65519, 00:13:03.920 "namespaces": [ 00:13:03.920 { 00:13:03.920 "nsid": 1, 00:13:03.920 "bdev_name": "Malloc1", 00:13:03.920 "name": "Malloc1", 00:13:03.920 "nguid": "D1123A582C72492F9ADA54D1B44B83DE", 00:13:03.920 "uuid": "d1123a58-2c72-492f-9ada-54d1b44b83de" 00:13:03.920 }, 00:13:03.920 { 00:13:03.920 "nsid": 2, 00:13:03.920 "bdev_name": "Malloc3", 00:13:03.920 "name": "Malloc3", 00:13:03.920 "nguid": "09EF99DF172A43AEA5AA480D6BAE5828", 00:13:03.920 "uuid": "09ef99df-172a-43ae-a5aa-480d6bae5828" 00:13:03.920 } 00:13:03.920 ] 00:13:03.920 }, 00:13:03.920 { 00:13:03.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:03.920 "subtype": "NVMe", 00:13:03.920 "listen_addresses": [ 00:13:03.920 { 00:13:03.920 "trtype": "VFIOUSER", 00:13:03.920 "adrfam": "IPv4", 00:13:03.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:03.920 "trsvcid": "0" 00:13:03.920 } 00:13:03.920 ], 00:13:03.920 "allow_any_host": true, 00:13:03.920 "hosts": [], 00:13:03.920 "serial_number": "SPDK2", 00:13:03.920 "model_number": "SPDK bdev Controller", 00:13:03.920 "max_namespaces": 32, 00:13:03.920 "min_cntlid": 1, 00:13:03.920 "max_cntlid": 65519, 00:13:03.920 "namespaces": [ 00:13:03.920 { 00:13:03.920 "nsid": 1, 00:13:03.920 "bdev_name": "Malloc2", 00:13:03.920 "name": "Malloc2", 00:13:03.920 "nguid": "70E18F30551E4A2C907E00E709E9488B", 00:13:03.920 "uuid": "70e18f30-551e-4a2c-907e-00e709e9488b" 00:13:03.920 } 00:13:03.920 ] 
00:13:03.920 } 00:13:03.920 ] 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2906162 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:03.920 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:04.179 [2024-11-15 11:31:44.346824] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.179 Malloc4 00:13:04.179 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:04.435 [2024-11-15 11:31:44.723678] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.435 11:31:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.435 Asynchronous Event Request test 00:13:04.435 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.435 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:04.435 Registering asynchronous event callbacks... 00:13:04.435 Starting namespace attribute notice tests for all controllers... 00:13:04.435 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:04.435 aer_cb - Changed Namespace 00:13:04.435 Cleaning up... 
00:13:04.693 [ 00:13:04.693 { 00:13:04.693 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.693 "subtype": "Discovery", 00:13:04.693 "listen_addresses": [], 00:13:04.693 "allow_any_host": true, 00:13:04.693 "hosts": [] 00:13:04.693 }, 00:13:04.693 { 00:13:04.693 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.693 "subtype": "NVMe", 00:13:04.693 "listen_addresses": [ 00:13:04.693 { 00:13:04.693 "trtype": "VFIOUSER", 00:13:04.693 "adrfam": "IPv4", 00:13:04.693 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.693 "trsvcid": "0" 00:13:04.693 } 00:13:04.693 ], 00:13:04.693 "allow_any_host": true, 00:13:04.693 "hosts": [], 00:13:04.693 "serial_number": "SPDK1", 00:13:04.693 "model_number": "SPDK bdev Controller", 00:13:04.693 "max_namespaces": 32, 00:13:04.693 "min_cntlid": 1, 00:13:04.693 "max_cntlid": 65519, 00:13:04.693 "namespaces": [ 00:13:04.693 { 00:13:04.693 "nsid": 1, 00:13:04.693 "bdev_name": "Malloc1", 00:13:04.693 "name": "Malloc1", 00:13:04.693 "nguid": "D1123A582C72492F9ADA54D1B44B83DE", 00:13:04.693 "uuid": "d1123a58-2c72-492f-9ada-54d1b44b83de" 00:13:04.693 }, 00:13:04.693 { 00:13:04.693 "nsid": 2, 00:13:04.693 "bdev_name": "Malloc3", 00:13:04.693 "name": "Malloc3", 00:13:04.693 "nguid": "09EF99DF172A43AEA5AA480D6BAE5828", 00:13:04.693 "uuid": "09ef99df-172a-43ae-a5aa-480d6bae5828" 00:13:04.693 } 00:13:04.693 ] 00:13:04.693 }, 00:13:04.693 { 00:13:04.693 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.693 "subtype": "NVMe", 00:13:04.693 "listen_addresses": [ 00:13:04.693 { 00:13:04.693 "trtype": "VFIOUSER", 00:13:04.693 "adrfam": "IPv4", 00:13:04.693 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.693 "trsvcid": "0" 00:13:04.693 } 00:13:04.693 ], 00:13:04.693 "allow_any_host": true, 00:13:04.693 "hosts": [], 00:13:04.693 "serial_number": "SPDK2", 00:13:04.693 "model_number": "SPDK bdev Controller", 00:13:04.693 "max_namespaces": 32, 00:13:04.693 "min_cntlid": 1, 00:13:04.693 "max_cntlid": 65519, 00:13:04.693 "namespaces": [ 00:13:04.693 { 00:13:04.693 "nsid": 1, 00:13:04.693 "bdev_name": "Malloc2", 00:13:04.693 "name": "Malloc2", 00:13:04.693 "nguid": "70E18F30551E4A2C907E00E709E9488B", 00:13:04.693 "uuid": "70e18f30-551e-4a2c-907e-00e709e9488b" 00:13:04.693 }, 00:13:04.693 { 00:13:04.693 "nsid": 2, 00:13:04.693 "bdev_name": "Malloc4", 00:13:04.693 "name": "Malloc4", 00:13:04.693 "nguid": "3225748D5A81422CB10D1154DF253612", 00:13:04.693 "uuid": "3225748d-5a81-422c-b10d-1154df253612" 00:13:04.693 } 00:13:04.693 ] 00:13:04.693 } 00:13:04.693 ] 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2906162 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2900548 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2900548 ']' 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2900548 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900548 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900548' 00:13:04.693 killing process with pid 2900548 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2900548 00:13:04.693 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2900548 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2906304 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2906304' 00:13:05.264 Process pid: 2906304 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2906304 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2906304 ']' 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 [2024-11-15 11:31:45.442709] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:05.264 [2024-11-15 11:31:45.443732] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:13:05.264 [2024-11-15 11:31:45.443804] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.264 [2024-11-15 11:31:45.510704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.264 [2024-11-15 11:31:45.564558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.264 [2024-11-15 11:31:45.564613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.264 [2024-11-15 11:31:45.564627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.264 [2024-11-15 11:31:45.564638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.264 [2024-11-15 11:31:45.564646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.264 [2024-11-15 11:31:45.566059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.264 [2024-11-15 11:31:45.566165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.264 [2024-11-15 11:31:45.566244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.264 [2024-11-15 11:31:45.566247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.264 [2024-11-15 11:31:45.649468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:05.264 [2024-11-15 11:31:45.649645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:05.264 [2024-11-15 11:31:45.649962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:05.264 [2024-11-15 11:31:45.650616] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:05.264 [2024-11-15 11:31:45.650848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:05.264 11:31:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:06.639 11:31:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:06.639 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:06.639 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:06.639 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:06.639 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:06.639 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:07.330 Malloc1 00:13:07.330 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:07.330 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:07.588 11:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:07.845 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:07.845 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:07.845 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:08.103 Malloc2 00:13:08.103 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:08.361 11:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:08.619 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:08.876 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:08.876 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2906304 00:13:08.876 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2906304 ']' 00:13:08.876 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2906304 00:13:08.876 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:08.876 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.877 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906304 00:13:09.165 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.165 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.165 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906304' 00:13:09.165 killing process with pid 2906304 00:13:09.165 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2906304 00:13:09.165 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2906304 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:09.451 00:13:09.451 real 0m53.522s 00:13:09.451 user 3m26.760s 00:13:09.451 sys 0m3.949s 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:09.451 ************************************ 00:13:09.451 END TEST nvmf_vfio_user 00:13:09.451 ************************************ 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.451 ************************************ 00:13:09.451 START TEST nvmf_vfio_user_nvme_compliance 00:13:09.451 ************************************ 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:09.451 * Looking for test storage... 
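The killprocess helper traced just above (and after every target in this log) reduces to roughly the following; a simplified sketch of only the checks visible in the trace (the real helper in autotest_common.sh also branches on FreeBSD and special-cases sudo-wrapped processes):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 0               # nothing to do if the process is already gone
      local process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
      # (the real helper handles process_name = sudo separately; omitted in this sketch)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                              # block until the target has actually exited
  }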
00:13:09.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.451 --rc genhtml_branch_coverage=1 00:13:09.451 --rc genhtml_function_coverage=1 00:13:09.451 --rc genhtml_legend=1 00:13:09.451 --rc geninfo_all_blocks=1 00:13:09.451 --rc geninfo_unexecuted_blocks=1 00:13:09.451 00:13:09.451 ' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.451 --rc genhtml_branch_coverage=1 00:13:09.451 --rc genhtml_function_coverage=1 00:13:09.451 --rc genhtml_legend=1 00:13:09.451 --rc geninfo_all_blocks=1 00:13:09.451 --rc geninfo_unexecuted_blocks=1 00:13:09.451 00:13:09.451 ' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.451 --rc genhtml_branch_coverage=1 00:13:09.451 --rc genhtml_function_coverage=1 00:13:09.451 --rc genhtml_legend=1 00:13:09.451 --rc geninfo_all_blocks=1 00:13:09.451 --rc geninfo_unexecuted_blocks=1 00:13:09.451 00:13:09.451 ' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.451 --rc genhtml_branch_coverage=1 00:13:09.451 --rc genhtml_function_coverage=1 00:13:09.451 --rc genhtml_legend=1 00:13:09.451 --rc geninfo_all_blocks=1 00:13:09.451 --rc 
geninfo_unexecuted_blocks=1 00:13:09.451 00:13:09.451 ' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.451 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2906912 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2906912' 00:13:09.452 Process pid: 2906912 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2906912 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2906912 ']' 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.452 11:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:09.452 [2024-11-15 11:31:49.848989] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:13:09.452 [2024-11-15 11:31:49.849061] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.710 [2024-11-15 11:31:49.915050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.710 [2024-11-15 11:31:49.971206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.710 [2024-11-15 11:31:49.971258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.710 [2024-11-15 11:31:49.971281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.710 [2024-11-15 11:31:49.971291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.710 [2024-11-15 11:31:49.971300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.710 [2024-11-15 11:31:49.972724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.710 [2024-11-15 11:31:49.972783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.710 [2024-11-15 11:31:49.972786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.710 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.710 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:09.710 11:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.081 malloc0 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:11.081 11:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.081 11:31:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:11.081 00:13:11.081 00:13:11.081 CUnit - A unit testing framework for C - Version 2.1-3 00:13:11.081 http://cunit.sourceforge.net/ 00:13:11.081 00:13:11.081 00:13:11.081 Suite: nvme_compliance 00:13:11.081 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 11:31:51.345698] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.081 [2024-11-15 11:31:51.347197] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:11.081 [2024-11-15 11:31:51.347223] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:11.081 [2024-11-15 11:31:51.347235] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:11.081 [2024-11-15 11:31:51.348722] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.081 passed 00:13:11.081 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 11:31:51.436323] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.081 [2024-11-15 11:31:51.439349] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.081 passed 00:13:11.340 Test: admin_identify_ns ...[2024-11-15 11:31:51.526182] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.340 [2024-11-15 11:31:51.585325] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:11.340 [2024-11-15 11:31:51.593320] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:11.340 [2024-11-15 11:31:51.614445] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:11.340 passed 00:13:11.340 Test: admin_get_features_mandatory_features ...[2024-11-15 11:31:51.696597] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.340 [2024-11-15 11:31:51.699634] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.340 passed 00:13:11.597 Test: admin_get_features_optional_features ...[2024-11-15 11:31:51.787173] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.597 [2024-11-15 11:31:51.790196] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.597 passed 00:13:11.597 Test: admin_set_features_number_of_queues ...[2024-11-15 11:31:51.871427] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.597 [2024-11-15 11:31:51.980406] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.597 passed 00:13:11.855 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 11:31:52.063052] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.855 [2024-11-15 11:31:52.066074] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.855 passed 00:13:11.855 Test: admin_get_log_page_with_lpo ...[2024-11-15 11:31:52.147920] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:11.855 [2024-11-15 11:31:52.215331] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:11.855 [2024-11-15 11:31:52.228392] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:11.855 passed 00:13:12.113 Test: fabric_property_get ...[2024-11-15 11:31:52.312080] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.113 [2024-11-15 11:31:52.313396] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:12.113 [2024-11-15 11:31:52.315099] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.113 passed 00:13:12.113 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 11:31:52.400678] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.113 [2024-11-15 11:31:52.401966] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:12.113 [2024-11-15 11:31:52.403701] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.113 passed 00:13:12.113 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 11:31:52.485871] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.370 [2024-11-15 11:31:52.569324] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.370 [2024-11-15 11:31:52.585314] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.370 [2024-11-15 11:31:52.590441] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.370 passed 00:13:12.370 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 11:31:52.674177] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.370 [2024-11-15 11:31:52.675506] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:12.371 [2024-11-15 11:31:52.677199] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.371 passed 00:13:12.371 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 11:31:52.762449] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.628 [2024-11-15 11:31:52.839310] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:12.628 [2024-11-15 11:31:52.863318] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:12.628 [2024-11-15 11:31:52.868436] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.628 passed 00:13:12.628 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 11:31:52.953138] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.628 [2024-11-15 11:31:52.954465] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:12.628 [2024-11-15 11:31:52.954507] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:12.628 [2024-11-15 11:31:52.956167] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.628 passed 00:13:12.628 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 11:31:53.037500] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.886 [2024-11-15 11:31:53.130317] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:12.886 [2024-11-15 11:31:53.138314] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:12.886 [2024-11-15 11:31:53.146313] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:12.886 [2024-11-15 11:31:53.154316] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:12.886 [2024-11-15 11:31:53.183433] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:12.886 passed 00:13:12.886 Test: admin_create_io_sq_verify_pc ...[2024-11-15 11:31:53.267184] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:12.886 [2024-11-15 11:31:53.284328] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:12.886 [2024-11-15 11:31:53.301570] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:13.144 passed 00:13:13.144 Test: admin_create_io_qp_max_qps ...[2024-11-15 11:31:53.384140] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.077 [2024-11-15 11:31:54.480319] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:14.645 [2024-11-15 11:31:54.864568] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.645 passed 00:13:14.645 Test: admin_create_io_sq_shared_cq ...[2024-11-15 11:31:54.947877] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:14.903 [2024-11-15 11:31:55.079325] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:14.903 [2024-11-15 11:31:55.116402] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:14.903 passed 00:13:14.903 00:13:14.903 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.903 suites 1 1 n/a 0 0 00:13:14.903 tests 18 18 18 0 0 00:13:14.903 asserts 
360 360 360 0 n/a 00:13:14.903 00:13:14.903 Elapsed time = 1.563 seconds 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2906912 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2906912 ']' 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2906912 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906912 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906912' 00:13:14.903 killing process with pid 2906912 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2906912 00:13:14.903 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2906912 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:15.161 00:13:15.161 real 0m5.823s 00:13:15.161 user 0m16.323s 00:13:15.161 sys 0m0.578s 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:15.161 ************************************ 00:13:15.161 END TEST nvmf_vfio_user_nvme_compliance 00:13:15.161 ************************************ 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.161 ************************************ 00:13:15.161 START TEST nvmf_vfio_user_fuzz 00:13:15.161 ************************************ 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:15.161 * Looking for test storage... 
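Condensed, the nvmf_vfio_user_nvme_compliance pass that just finished (18/18 CUnit tests in ~1.6 s) needed only one vfio-user subsystem; a minimal sketch of its setup and invocation with the names from this run (rpc_cmd in the trace is the harness wrapper for these RPC calls):

  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  ./test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'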
00:13:15.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.161 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.422 --rc genhtml_branch_coverage=1 00:13:15.422 --rc genhtml_function_coverage=1 00:13:15.422 --rc genhtml_legend=1 00:13:15.422 --rc geninfo_all_blocks=1 00:13:15.422 --rc geninfo_unexecuted_blocks=1 00:13:15.422 00:13:15.422 ' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.422 --rc genhtml_branch_coverage=1 00:13:15.422 --rc genhtml_function_coverage=1 00:13:15.422 --rc genhtml_legend=1 00:13:15.422 --rc geninfo_all_blocks=1 00:13:15.422 --rc geninfo_unexecuted_blocks=1 00:13:15.422 00:13:15.422 ' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.422 --rc genhtml_branch_coverage=1 00:13:15.422 --rc genhtml_function_coverage=1 00:13:15.422 --rc genhtml_legend=1 00:13:15.422 --rc geninfo_all_blocks=1 00:13:15.422 --rc geninfo_unexecuted_blocks=1 00:13:15.422 00:13:15.422 ' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.422 --rc genhtml_branch_coverage=1 00:13:15.422 --rc genhtml_function_coverage=1 00:13:15.422 --rc genhtml_legend=1 00:13:15.422 --rc geninfo_all_blocks=1 00:13:15.422 --rc geninfo_unexecuted_blocks=1 00:13:15.422 00:13:15.422 ' 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.422 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:15.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2907646 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2907646' 00:13:15.423 Process pid: 2907646 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2907646 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2907646 ']' 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.423 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:15.682 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.682 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:15.682 11:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.618 malloc0 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.618 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
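With the target prepared by the RPC calls above and the transport ID stored in trid, the fuzz pass that follows is a single nvme_fuzz invocation; a sketch with the exact values from this run (core mask, duration, and seed are explained; -N and -a are passed through as the harness uses them):

  trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
  # Core mask 0x2 puts the fuzzer on core 1 (the target was started with -m 0x1, core 0);
  # -t 30 runs for 30 seconds, -S 123456 fixes the random seed for reproducibility.
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a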
00:13:16.877 11:31:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:48.952 Fuzzing completed. Shutting down the fuzz application 00:13:48.952 00:13:48.952 Dumping successful admin opcodes: 00:13:48.952 8, 9, 10, 24, 00:13:48.952 Dumping successful io opcodes: 00:13:48.952 0, 00:13:48.952 NS: 0x20000081ef00 I/O qp, Total commands completed: 724169, total successful commands: 2820, random_seed: 4034579072 00:13:48.952 NS: 0x20000081ef00 admin qp, Total commands completed: 92650, total successful commands: 749, random_seed: 308935616 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2907646 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2907646 ']' 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2907646 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2907646 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2907646' 00:13:48.952 killing process with pid 2907646 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2907646 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2907646 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:48.952 00:13:48.952 real 0m32.278s 00:13:48.952 user 0m34.210s 00:13:48.952 sys 0m27.388s 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 
************************************ 00:13:48.952 END TEST nvmf_vfio_user_fuzz 00:13:48.952 ************************************ 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.952 ************************************ 00:13:48.952 START TEST nvmf_auth_target 00:13:48.952 ************************************ 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:48.952 * Looking for test storage... 00:13:48.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.952 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.953 --rc genhtml_branch_coverage=1 00:13:48.953 --rc genhtml_function_coverage=1 00:13:48.953 --rc genhtml_legend=1 00:13:48.953 --rc geninfo_all_blocks=1 00:13:48.953 --rc geninfo_unexecuted_blocks=1 00:13:48.953 00:13:48.953 ' 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.953 --rc genhtml_branch_coverage=1 00:13:48.953 --rc genhtml_function_coverage=1 00:13:48.953 --rc genhtml_legend=1 00:13:48.953 --rc geninfo_all_blocks=1 00:13:48.953 --rc geninfo_unexecuted_blocks=1 00:13:48.953 00:13:48.953 ' 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.953 --rc genhtml_branch_coverage=1 00:13:48.953 --rc genhtml_function_coverage=1 00:13:48.953 --rc genhtml_legend=1 00:13:48.953 --rc geninfo_all_blocks=1 00:13:48.953 --rc geninfo_unexecuted_blocks=1 00:13:48.953 00:13:48.953 ' 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:48.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.953 --rc genhtml_branch_coverage=1 00:13:48.953 --rc genhtml_function_coverage=1 00:13:48.953 --rc genhtml_legend=1 00:13:48.953 --rc geninfo_all_blocks=1 00:13:48.953 --rc geninfo_unexecuted_blocks=1 00:13:48.953 00:13:48.953 ' 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.953 11:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.953 11:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:48.953 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:48.954 11:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:49.888 
11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:49.888 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.888 11:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:49.888 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:49.888 Found net devices under 0000:09:00.0: cvl_0_0 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:49.888 Found net devices under 0000:09:00.1: cvl_0_1 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.888 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.889 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.147 11:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:13:50.147 00:13:50.147 --- 10.0.0.2 ping statistics --- 00:13:50.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.147 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:13:50.147 00:13:50.147 --- 10.0.0.1 ping statistics --- 00:13:50.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.147 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2913097 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2913097 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2913097 ']' 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
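Annotation: the nvmftestinit/nvmf_tcp_init block above splits the two detected E810 ports between namespaces so the TCP tests get a real initiator/target path on one machine: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens port 4420, both directions are ping-checked, and the auth-test target is started inside the namespace. Condensed from the trace (interface names are the ones detected on this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check connectivity in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &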
00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.147 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2913122 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a0a7dd9b8ba988bc0be8d50d7fdc05c09d41a2a1462a1f23 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7WX 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a0a7dd9b8ba988bc0be8d50d7fdc05c09d41a2a1462a1f23 0 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a0a7dd9b8ba988bc0be8d50d7fdc05c09d41a2a1462a1f23 0 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a0a7dd9b8ba988bc0be8d50d7fdc05c09d41a2a1462a1f23 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7WX 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7WX 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.7WX 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0d54f53b329f4e553dc142c289e6e0ad60ab6b771428e7fb3854bec790e85f77 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3a3 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0d54f53b329f4e553dc142c289e6e0ad60ab6b771428e7fb3854bec790e85f77 3 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0d54f53b329f4e553dc142c289e6e0ad60ab6b771428e7fb3854bec790e85f77 3 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0d54f53b329f4e553dc142c289e6e0ad60ab6b771428e7fb3854bec790e85f77 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:50.406 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3a3 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3a3 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.3a3 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e4cbc5ec460b5564ee116dcfeac7f716 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nOT 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e4cbc5ec460b5564ee116dcfeac7f716 1 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e4cbc5ec460b5564ee116dcfeac7f716 1 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e4cbc5ec460b5564ee116dcfeac7f716 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:50.407 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nOT 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nOT 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nOT 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f4dbf0e2c08ba9727041925afa91b583ffee1baa5f10f04c 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XpO 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f4dbf0e2c08ba9727041925afa91b583ffee1baa5f10f04c 2 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f4dbf0e2c08ba9727041925afa91b583ffee1baa5f10f04c 2 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.666 11:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f4dbf0e2c08ba9727041925afa91b583ffee1baa5f10f04c 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XpO 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XpO 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.XpO 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:50.666 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b9a2f32333f6489042f4393a87c4e1facd3ab6535e194251 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7rw 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b9a2f32333f6489042f4393a87c4e1facd3ab6535e194251 2 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b9a2f32333f6489042f4393a87c4e1facd3ab6535e194251 2 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b9a2f32333f6489042f4393a87c4e1facd3ab6535e194251 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7rw 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7rw 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.7rw 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c4be507b08182fcef5b028f24122d567 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.o35 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c4be507b08182fcef5b028f24122d567 1 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c4be507b08182fcef5b028f24122d567 1 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c4be507b08182fcef5b028f24122d567 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.o35 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.o35 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.o35 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=74e7ae193089495a313dba7f67027ab2e2c189f4914d69b4890c7d873a0d1775 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iii 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 74e7ae193089495a313dba7f67027ab2e2c189f4914d69b4890c7d873a0d1775 3 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 74e7ae193089495a313dba7f67027ab2e2c189f4914d69b4890c7d873a0d1775 3 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=74e7ae193089495a313dba7f67027ab2e2c189f4914d69b4890c7d873a0d1775 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:50.667 11:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iii 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iii 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.iii 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2913097 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2913097 ']' 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.667 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2913122 /var/tmp/host.sock 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2913122 ']' 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:50.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
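Annotation: auth.sh pre-generates one key (and, for keys 0 through 2, a controller key) per DH-HMAC-CHAP digest before the host application on /var/tmp/host.sock comes up. gen_dhchap_key in nvmf/common.sh, as traced above, draws len/2 random bytes, hex-encodes them, and hands them to an inline python helper that writes the DHHC-1-formatted secret; the exact encoding of the payload is that helper's business, so the sketch below only mirrors the traced shell steps:

  # gen_dhchap_key null 48  ->  keys[0]; the sha256/sha384/sha512 variants differ only in length and digest id
  keys=() ckeys=()
  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes as 48 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  # format_dhchap_key <hexkey> <digest-id> runs the python helper to place the "DHHC-1:..." secret in $file
  chmod 0600 "$file"                     # secrets stay readable only by the test user
  keys[0]=$file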
00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.926 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.185 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.185 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:51.185 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:51.185 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.185 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7WX 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.7WX 00:13:51.442 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.7WX 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.3a3 ]] 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:13:51.700 11:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:13:51.958 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:51.958 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nOT 00:13:51.958 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.958 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.958 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.958 11:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nOT 00:13:51.958 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nOT 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.XpO ]] 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XpO 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XpO 00:13:52.216 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XpO 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7rw 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.7rw 00:13:52.474 11:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.7rw 00:13:52.734 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.o35 ]] 00:13:52.734 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o35 00:13:52.734 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.734 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.734 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.734 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o35 00:13:52.735 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o35 00:13:52.993 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:52.993 11:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iii 00:13:52.993 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.993 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.993 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.993 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.iii 00:13:52.993 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.iii 00:13:53.252 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:53.252 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:53.252 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.252 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.252 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:53.252 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:53.509 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:53.509 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.509 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.509 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:53.509 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.510 11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.510 
11:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.076 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.076 { 00:13:54.076 "cntlid": 1, 00:13:54.076 "qid": 0, 00:13:54.076 "state": "enabled", 00:13:54.076 "thread": "nvmf_tgt_poll_group_000", 00:13:54.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:54.076 "listen_address": { 00:13:54.076 "trtype": "TCP", 00:13:54.076 "adrfam": "IPv4", 00:13:54.076 "traddr": "10.0.0.2", 00:13:54.076 "trsvcid": "4420" 00:13:54.076 }, 00:13:54.076 "peer_address": { 00:13:54.076 "trtype": "TCP", 00:13:54.076 "adrfam": "IPv4", 00:13:54.076 "traddr": "10.0.0.1", 00:13:54.076 "trsvcid": "45734" 00:13:54.076 }, 00:13:54.076 "auth": { 00:13:54.076 "state": "completed", 00:13:54.076 "digest": "sha256", 00:13:54.076 "dhgroup": "null" 00:13:54.076 } 00:13:54.076 } 00:13:54.076 ]' 00:13:54.076 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.334 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.591 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:13:54.591 11:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.523 11:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.780 11:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.780 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.038 00:13:56.038 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.038 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.038 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.296 { 00:13:56.296 "cntlid": 3, 00:13:56.296 "qid": 0, 00:13:56.296 "state": "enabled", 00:13:56.296 "thread": "nvmf_tgt_poll_group_000", 00:13:56.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:56.296 "listen_address": { 00:13:56.296 "trtype": "TCP", 00:13:56.296 "adrfam": "IPv4", 00:13:56.296 "traddr": "10.0.0.2", 00:13:56.296 "trsvcid": "4420" 00:13:56.296 }, 00:13:56.296 "peer_address": { 00:13:56.296 "trtype": "TCP", 00:13:56.296 "adrfam": "IPv4", 00:13:56.296 "traddr": "10.0.0.1", 00:13:56.296 "trsvcid": "46832" 00:13:56.296 }, 00:13:56.296 "auth": { 00:13:56.296 "state": "completed", 00:13:56.296 "digest": "sha256", 00:13:56.296 "dhgroup": "null" 00:13:56.296 } 00:13:56.296 } 00:13:56.296 ]' 00:13:56.296 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.554 11:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.812 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:13:56.812 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:57.746 11:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.004 11:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.004 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.262 00:13:58.262 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.262 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.262 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.520 { 00:13:58.520 "cntlid": 5, 00:13:58.520 "qid": 0, 00:13:58.520 "state": "enabled", 00:13:58.520 "thread": "nvmf_tgt_poll_group_000", 00:13:58.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:58.520 "listen_address": { 00:13:58.520 "trtype": "TCP", 00:13:58.520 "adrfam": "IPv4", 00:13:58.520 "traddr": "10.0.0.2", 00:13:58.520 "trsvcid": "4420" 00:13:58.520 }, 00:13:58.520 "peer_address": { 00:13:58.520 "trtype": "TCP", 00:13:58.520 "adrfam": "IPv4", 00:13:58.520 "traddr": "10.0.0.1", 00:13:58.520 "trsvcid": "46868" 00:13:58.520 }, 00:13:58.520 "auth": { 00:13:58.520 "state": "completed", 00:13:58.520 "digest": "sha256", 00:13:58.520 "dhgroup": "null" 00:13:58.520 } 00:13:58.520 } 00:13:58.520 ]' 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.520 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.777 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:58.777 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.777 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.777 11:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.777 11:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.036 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:13:59.036 11:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:59.969 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.227 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:00.228 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.228 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.485 00:14:00.485 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.485 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.485 11:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.743 { 00:14:00.743 "cntlid": 7, 00:14:00.743 "qid": 0, 00:14:00.743 "state": "enabled", 00:14:00.743 "thread": "nvmf_tgt_poll_group_000", 00:14:00.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:00.743 "listen_address": { 00:14:00.743 "trtype": "TCP", 00:14:00.743 "adrfam": "IPv4", 00:14:00.743 "traddr": "10.0.0.2", 00:14:00.743 "trsvcid": "4420" 00:14:00.743 }, 00:14:00.743 "peer_address": { 00:14:00.743 "trtype": "TCP", 00:14:00.743 "adrfam": "IPv4", 00:14:00.743 "traddr": "10.0.0.1", 00:14:00.743 "trsvcid": "46900" 00:14:00.743 }, 00:14:00.743 "auth": { 00:14:00.743 "state": "completed", 00:14:00.743 "digest": "sha256", 00:14:00.743 "dhgroup": "null" 00:14:00.743 } 00:14:00.743 } 00:14:00.743 ]' 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:00.743 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.001 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.001 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.001 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.259 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:01.259 11:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:02.191 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.449 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.707 00:14:02.707 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.707 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.707 11:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.965 { 00:14:02.965 "cntlid": 9, 00:14:02.965 "qid": 0, 00:14:02.965 "state": "enabled", 00:14:02.965 "thread": "nvmf_tgt_poll_group_000", 00:14:02.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:02.965 "listen_address": { 00:14:02.965 "trtype": "TCP", 00:14:02.965 "adrfam": "IPv4", 00:14:02.965 "traddr": "10.0.0.2", 00:14:02.965 "trsvcid": "4420" 00:14:02.965 }, 00:14:02.965 "peer_address": { 00:14:02.965 "trtype": "TCP", 00:14:02.965 "adrfam": "IPv4", 00:14:02.965 "traddr": "10.0.0.1", 00:14:02.965 "trsvcid": "46940" 00:14:02.965 }, 00:14:02.965 "auth": { 00:14:02.965 "state": "completed", 00:14:02.965 "digest": "sha256", 00:14:02.965 "dhgroup": "ffdhe2048" 00:14:02.965 } 00:14:02.965 } 00:14:02.965 ]' 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.965 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.226 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:03.226 11:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:04.161 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.421 11:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.421 11:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.048 00:14:05.048 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.048 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.048 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.048 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.049 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.049 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.049 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.049 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.049 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.049 { 00:14:05.049 "cntlid": 11, 00:14:05.049 "qid": 0, 00:14:05.049 "state": "enabled", 00:14:05.049 "thread": "nvmf_tgt_poll_group_000", 00:14:05.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:05.049 "listen_address": { 00:14:05.049 "trtype": "TCP", 00:14:05.049 "adrfam": "IPv4", 00:14:05.049 "traddr": "10.0.0.2", 00:14:05.049 "trsvcid": "4420" 00:14:05.049 }, 00:14:05.049 "peer_address": { 00:14:05.049 "trtype": "TCP", 00:14:05.049 "adrfam": "IPv4", 00:14:05.049 "traddr": "10.0.0.1", 00:14:05.049 "trsvcid": "46974" 00:14:05.049 }, 00:14:05.049 "auth": { 00:14:05.049 "state": "completed", 00:14:05.049 "digest": "sha256", 00:14:05.049 "dhgroup": "ffdhe2048" 00:14:05.049 } 00:14:05.049 } 00:14:05.049 ]' 00:14:05.049 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.307 11:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.307 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.307 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:05.307 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.307 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.307 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.307 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.626 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:05.626 11:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:06.560 11:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.560 11:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.126 00:14:07.126 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.126 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.126 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.384 { 00:14:07.384 "cntlid": 13, 00:14:07.384 "qid": 0, 00:14:07.384 "state": "enabled", 00:14:07.384 "thread": "nvmf_tgt_poll_group_000", 00:14:07.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:07.384 "listen_address": { 00:14:07.384 "trtype": "TCP", 00:14:07.384 "adrfam": "IPv4", 00:14:07.384 "traddr": "10.0.0.2", 00:14:07.384 "trsvcid": "4420" 00:14:07.384 }, 00:14:07.384 "peer_address": { 00:14:07.384 "trtype": "TCP", 00:14:07.384 "adrfam": "IPv4", 00:14:07.384 "traddr": "10.0.0.1", 00:14:07.384 "trsvcid": "36440" 00:14:07.384 }, 00:14:07.384 "auth": { 00:14:07.384 "state": "completed", 00:14:07.384 "digest": 
"sha256", 00:14:07.384 "dhgroup": "ffdhe2048" 00:14:07.384 } 00:14:07.384 } 00:14:07.384 ]' 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.384 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.642 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:07.642 11:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.575 11:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.833 11:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.833 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.091 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.091 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:09.091 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.091 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.349 00:14:09.349 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.349 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.349 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.608 { 00:14:09.608 "cntlid": 15, 00:14:09.608 "qid": 0, 00:14:09.608 "state": "enabled", 00:14:09.608 "thread": "nvmf_tgt_poll_group_000", 00:14:09.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:09.608 "listen_address": { 00:14:09.608 "trtype": "TCP", 00:14:09.608 "adrfam": "IPv4", 00:14:09.608 "traddr": "10.0.0.2", 00:14:09.608 "trsvcid": "4420" 00:14:09.608 }, 00:14:09.608 "peer_address": { 00:14:09.608 "trtype": "TCP", 00:14:09.608 "adrfam": "IPv4", 00:14:09.608 "traddr": "10.0.0.1", 00:14:09.608 
"trsvcid": "36454" 00:14:09.608 }, 00:14:09.608 "auth": { 00:14:09.608 "state": "completed", 00:14:09.608 "digest": "sha256", 00:14:09.608 "dhgroup": "ffdhe2048" 00:14:09.608 } 00:14:09.608 } 00:14:09.608 ]' 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.608 11:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.865 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:09.865 11:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:10.799 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:11.057 11:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.057 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.622 00:14:11.622 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.622 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.622 11:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.880 { 00:14:11.880 "cntlid": 17, 00:14:11.880 "qid": 0, 00:14:11.880 "state": "enabled", 00:14:11.880 "thread": "nvmf_tgt_poll_group_000", 00:14:11.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:11.880 "listen_address": { 00:14:11.880 "trtype": "TCP", 00:14:11.880 "adrfam": "IPv4", 
00:14:11.880 "traddr": "10.0.0.2", 00:14:11.880 "trsvcid": "4420" 00:14:11.880 }, 00:14:11.880 "peer_address": { 00:14:11.880 "trtype": "TCP", 00:14:11.880 "adrfam": "IPv4", 00:14:11.880 "traddr": "10.0.0.1", 00:14:11.880 "trsvcid": "36494" 00:14:11.880 }, 00:14:11.880 "auth": { 00:14:11.880 "state": "completed", 00:14:11.880 "digest": "sha256", 00:14:11.880 "dhgroup": "ffdhe3072" 00:14:11.880 } 00:14:11.880 } 00:14:11.880 ]' 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.880 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.138 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:12.138 11:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:13.071 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.329 11:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.895 00:14:13.895 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.895 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.895 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.152 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.152 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.153 { 
00:14:14.153 "cntlid": 19, 00:14:14.153 "qid": 0, 00:14:14.153 "state": "enabled", 00:14:14.153 "thread": "nvmf_tgt_poll_group_000", 00:14:14.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:14.153 "listen_address": { 00:14:14.153 "trtype": "TCP", 00:14:14.153 "adrfam": "IPv4", 00:14:14.153 "traddr": "10.0.0.2", 00:14:14.153 "trsvcid": "4420" 00:14:14.153 }, 00:14:14.153 "peer_address": { 00:14:14.153 "trtype": "TCP", 00:14:14.153 "adrfam": "IPv4", 00:14:14.153 "traddr": "10.0.0.1", 00:14:14.153 "trsvcid": "36520" 00:14:14.153 }, 00:14:14.153 "auth": { 00:14:14.153 "state": "completed", 00:14:14.153 "digest": "sha256", 00:14:14.153 "dhgroup": "ffdhe3072" 00:14:14.153 } 00:14:14.153 } 00:14:14.153 ]' 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.153 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.413 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:14.413 11:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.350 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.608 11:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.173 00:14:16.173 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.173 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.173 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.432 11:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.432 { 00:14:16.432 "cntlid": 21, 00:14:16.432 "qid": 0, 00:14:16.432 "state": "enabled", 00:14:16.432 "thread": "nvmf_tgt_poll_group_000", 00:14:16.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:16.432 "listen_address": { 00:14:16.432 "trtype": "TCP", 00:14:16.432 "adrfam": "IPv4", 00:14:16.432 "traddr": "10.0.0.2", 00:14:16.432 "trsvcid": "4420" 00:14:16.432 }, 00:14:16.432 "peer_address": { 00:14:16.432 "trtype": "TCP", 00:14:16.432 "adrfam": "IPv4", 00:14:16.432 "traddr": "10.0.0.1", 00:14:16.432 "trsvcid": "46592" 00:14:16.432 }, 00:14:16.432 "auth": { 00:14:16.432 "state": "completed", 00:14:16.432 "digest": "sha256", 00:14:16.432 "dhgroup": "ffdhe3072" 00:14:16.432 } 00:14:16.432 } 00:14:16.432 ]' 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.432 11:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.689 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:16.689 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:17.622 11:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.187 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.444 00:14:18.444 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.444 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.444 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.705 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.705 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.705 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.705 11:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.705 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.705 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.705 { 00:14:18.705 "cntlid": 23, 00:14:18.705 "qid": 0, 00:14:18.705 "state": "enabled", 00:14:18.705 "thread": "nvmf_tgt_poll_group_000", 00:14:18.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:18.705 "listen_address": { 00:14:18.705 "trtype": "TCP", 00:14:18.705 "adrfam": "IPv4", 00:14:18.705 "traddr": "10.0.0.2", 00:14:18.705 "trsvcid": "4420" 00:14:18.705 }, 00:14:18.705 "peer_address": { 00:14:18.705 "trtype": "TCP", 00:14:18.705 "adrfam": "IPv4", 00:14:18.705 "traddr": "10.0.0.1", 00:14:18.705 "trsvcid": "46622" 00:14:18.705 }, 00:14:18.705 "auth": { 00:14:18.705 "state": "completed", 00:14:18.705 "digest": "sha256", 00:14:18.705 "dhgroup": "ffdhe3072" 00:14:18.705 } 00:14:18.705 } 00:14:18.705 ]' 00:14:18.705 11:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.705 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.963 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:18.963 11:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:19.897 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.155 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.721 00:14:20.721 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.721 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.721 11:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.979 { 00:14:20.979 "cntlid": 25, 00:14:20.979 "qid": 0, 00:14:20.979 "state": "enabled", 00:14:20.979 "thread": "nvmf_tgt_poll_group_000", 00:14:20.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:20.979 "listen_address": { 00:14:20.979 "trtype": "TCP", 00:14:20.979 "adrfam": "IPv4", 00:14:20.979 "traddr": "10.0.0.2", 00:14:20.979 "trsvcid": "4420" 00:14:20.979 }, 00:14:20.979 "peer_address": { 00:14:20.979 "trtype": "TCP", 00:14:20.979 "adrfam": "IPv4", 00:14:20.979 "traddr": "10.0.0.1", 00:14:20.979 "trsvcid": "46650" 00:14:20.979 }, 00:14:20.979 "auth": { 00:14:20.979 "state": "completed", 00:14:20.979 "digest": "sha256", 00:14:20.979 "dhgroup": "ffdhe4096" 00:14:20.979 } 00:14:20.979 } 00:14:20.979 ]' 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.979 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.545 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:21.545 11:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:22.476 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:22.477 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.735 11:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.992 00:14:22.992 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.992 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.992 11:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.251 { 00:14:23.251 "cntlid": 27, 00:14:23.251 "qid": 0, 00:14:23.251 "state": "enabled", 00:14:23.251 "thread": "nvmf_tgt_poll_group_000", 00:14:23.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:23.251 "listen_address": { 00:14:23.251 "trtype": "TCP", 00:14:23.251 "adrfam": "IPv4", 00:14:23.251 "traddr": "10.0.0.2", 00:14:23.251 "trsvcid": "4420" 00:14:23.251 }, 00:14:23.251 "peer_address": { 00:14:23.251 "trtype": "TCP", 00:14:23.251 "adrfam": "IPv4", 00:14:23.251 "traddr": "10.0.0.1", 00:14:23.251 "trsvcid": "46666" 00:14:23.251 }, 00:14:23.251 "auth": { 00:14:23.251 "state": "completed", 00:14:23.251 "digest": "sha256", 00:14:23.251 "dhgroup": "ffdhe4096" 00:14:23.251 } 00:14:23.251 } 00:14:23.251 ]' 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.251 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.509 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.509 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.509 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.509 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.509 11:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.767 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:23.767 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.701 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.701 11:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.958 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.959 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.959 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.959 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.959 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.525 00:14:25.525 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.525 11:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.525 11:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.782 { 00:14:25.782 "cntlid": 29, 00:14:25.782 "qid": 0, 00:14:25.782 "state": "enabled", 00:14:25.782 "thread": "nvmf_tgt_poll_group_000", 00:14:25.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:25.782 "listen_address": { 00:14:25.782 "trtype": "TCP", 00:14:25.782 "adrfam": "IPv4", 00:14:25.782 "traddr": "10.0.0.2", 00:14:25.782 "trsvcid": "4420" 00:14:25.782 }, 00:14:25.782 "peer_address": { 00:14:25.782 "trtype": "TCP", 00:14:25.782 "adrfam": "IPv4", 00:14:25.782 "traddr": "10.0.0.1", 00:14:25.782 "trsvcid": "46678" 00:14:25.782 }, 00:14:25.782 "auth": { 00:14:25.782 "state": "completed", 00:14:25.782 "digest": "sha256", 00:14:25.782 "dhgroup": "ffdhe4096" 00:14:25.782 } 00:14:25.782 } 00:14:25.782 ]' 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.782 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.040 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:26.040 11:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:26.973 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.231 11:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.797 00:14:27.797 11:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.797 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.797 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.055 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.055 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.055 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.055 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.055 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.055 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.055 { 00:14:28.055 "cntlid": 31, 00:14:28.055 "qid": 0, 00:14:28.055 "state": "enabled", 00:14:28.055 "thread": "nvmf_tgt_poll_group_000", 00:14:28.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:28.055 "listen_address": { 00:14:28.055 "trtype": "TCP", 00:14:28.055 "adrfam": "IPv4", 00:14:28.055 "traddr": "10.0.0.2", 00:14:28.055 "trsvcid": "4420" 00:14:28.055 }, 00:14:28.055 "peer_address": { 00:14:28.055 "trtype": "TCP", 00:14:28.055 "adrfam": "IPv4", 00:14:28.055 "traddr": "10.0.0.1", 00:14:28.055 "trsvcid": "38522" 00:14:28.055 }, 00:14:28.055 "auth": { 00:14:28.055 "state": "completed", 00:14:28.056 "digest": "sha256", 00:14:28.056 "dhgroup": "ffdhe4096" 00:14:28.056 } 00:14:28.056 } 00:14:28.056 ]' 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.056 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.314 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:28.314 11:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:29.247 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.247 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:29.247 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.247 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.248 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.248 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.248 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.248 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.248 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.506 11:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.072 00:14:30.072 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.072 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.072 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.330 { 00:14:30.330 "cntlid": 33, 00:14:30.330 "qid": 0, 00:14:30.330 "state": "enabled", 00:14:30.330 "thread": "nvmf_tgt_poll_group_000", 00:14:30.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:30.330 "listen_address": { 00:14:30.330 "trtype": "TCP", 00:14:30.330 "adrfam": "IPv4", 00:14:30.330 "traddr": "10.0.0.2", 00:14:30.330 "trsvcid": "4420" 00:14:30.330 }, 00:14:30.330 "peer_address": { 00:14:30.330 "trtype": "TCP", 00:14:30.330 "adrfam": "IPv4", 00:14:30.330 "traddr": "10.0.0.1", 00:14:30.330 "trsvcid": "38548" 00:14:30.330 }, 00:14:30.330 "auth": { 00:14:30.330 "state": "completed", 00:14:30.330 "digest": "sha256", 00:14:30.330 "dhgroup": "ffdhe6144" 00:14:30.330 } 00:14:30.330 } 00:14:30.330 ]' 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.330 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.588 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:30.588 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.588 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.588 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.588 11:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.847 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret 
DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:30.847 11:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.781 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.040 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.607 00:14:32.607 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.607 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.607 11:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.865 { 00:14:32.865 "cntlid": 35, 00:14:32.865 "qid": 0, 00:14:32.865 "state": "enabled", 00:14:32.865 "thread": "nvmf_tgt_poll_group_000", 00:14:32.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:32.865 "listen_address": { 00:14:32.865 "trtype": "TCP", 00:14:32.865 "adrfam": "IPv4", 00:14:32.865 "traddr": "10.0.0.2", 00:14:32.865 "trsvcid": "4420" 00:14:32.865 }, 00:14:32.865 "peer_address": { 00:14:32.865 "trtype": "TCP", 00:14:32.865 "adrfam": "IPv4", 00:14:32.865 "traddr": "10.0.0.1", 00:14:32.865 "trsvcid": "38570" 00:14:32.865 }, 00:14:32.865 "auth": { 00:14:32.865 "state": "completed", 00:14:32.865 "digest": "sha256", 00:14:32.865 "dhgroup": "ffdhe6144" 00:14:32.865 } 00:14:32.865 } 00:14:32.865 ]' 00:14:32.865 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.123 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.381 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:33.381 11:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.314 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.572 11:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.197 00:14:35.197 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.197 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.197 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.455 { 00:14:35.455 "cntlid": 37, 00:14:35.455 "qid": 0, 00:14:35.455 "state": "enabled", 00:14:35.455 "thread": "nvmf_tgt_poll_group_000", 00:14:35.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:35.455 "listen_address": { 00:14:35.455 "trtype": "TCP", 00:14:35.455 "adrfam": "IPv4", 00:14:35.455 "traddr": "10.0.0.2", 00:14:35.455 "trsvcid": "4420" 00:14:35.455 }, 00:14:35.455 "peer_address": { 00:14:35.455 "trtype": "TCP", 00:14:35.455 "adrfam": "IPv4", 00:14:35.455 "traddr": "10.0.0.1", 00:14:35.455 "trsvcid": "38602" 00:14:35.455 }, 00:14:35.455 "auth": { 00:14:35.455 "state": "completed", 00:14:35.455 "digest": "sha256", 00:14:35.455 "dhgroup": "ffdhe6144" 00:14:35.455 } 00:14:35.455 } 00:14:35.455 ]' 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:35.455 11:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.714 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:35.714 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.646 11:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.904 11:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.904 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.471 00:14:37.471 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.471 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.471 11:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.729 { 00:14:37.729 "cntlid": 39, 00:14:37.729 "qid": 0, 00:14:37.729 "state": "enabled", 00:14:37.729 "thread": "nvmf_tgt_poll_group_000", 00:14:37.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:37.729 "listen_address": { 00:14:37.729 "trtype": "TCP", 00:14:37.729 "adrfam": "IPv4", 00:14:37.729 "traddr": "10.0.0.2", 00:14:37.729 "trsvcid": "4420" 00:14:37.729 }, 00:14:37.729 "peer_address": { 00:14:37.729 "trtype": "TCP", 00:14:37.729 "adrfam": "IPv4", 00:14:37.729 "traddr": "10.0.0.1", 00:14:37.729 "trsvcid": "57250" 00:14:37.729 }, 00:14:37.729 "auth": { 00:14:37.729 "state": "completed", 00:14:37.729 "digest": "sha256", 00:14:37.729 "dhgroup": "ffdhe6144" 00:14:37.729 } 00:14:37.729 } 00:14:37.729 ]' 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.729 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.987 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.987 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.987 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:37.987 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.987 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.245 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:38.245 11:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.179 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.438 11:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.373 00:14:40.373 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.373 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.373 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.631 { 00:14:40.631 "cntlid": 41, 00:14:40.631 "qid": 0, 00:14:40.631 "state": "enabled", 00:14:40.631 "thread": "nvmf_tgt_poll_group_000", 00:14:40.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:40.631 "listen_address": { 00:14:40.631 "trtype": "TCP", 00:14:40.631 "adrfam": "IPv4", 00:14:40.631 "traddr": "10.0.0.2", 00:14:40.631 "trsvcid": "4420" 00:14:40.631 }, 00:14:40.631 "peer_address": { 00:14:40.631 "trtype": "TCP", 00:14:40.631 "adrfam": "IPv4", 00:14:40.631 "traddr": "10.0.0.1", 00:14:40.631 "trsvcid": "57280" 00:14:40.631 }, 00:14:40.631 "auth": { 00:14:40.631 "state": "completed", 00:14:40.631 "digest": "sha256", 00:14:40.631 "dhgroup": "ffdhe8192" 00:14:40.631 } 00:14:40.631 } 00:14:40.631 ]' 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.631 11:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.631 11:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.631 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.631 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.631 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.197 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:41.197 11:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:42.130 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.388 11:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.322 00:14:43.322 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.322 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.322 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.578 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.578 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.579 { 00:14:43.579 "cntlid": 43, 00:14:43.579 "qid": 0, 00:14:43.579 "state": "enabled", 00:14:43.579 "thread": "nvmf_tgt_poll_group_000", 00:14:43.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:43.579 "listen_address": { 00:14:43.579 "trtype": "TCP", 00:14:43.579 "adrfam": "IPv4", 00:14:43.579 "traddr": "10.0.0.2", 00:14:43.579 "trsvcid": "4420" 00:14:43.579 }, 00:14:43.579 "peer_address": { 00:14:43.579 "trtype": "TCP", 00:14:43.579 "adrfam": "IPv4", 00:14:43.579 "traddr": "10.0.0.1", 00:14:43.579 "trsvcid": "57298" 00:14:43.579 }, 00:14:43.579 "auth": { 00:14:43.579 "state": "completed", 00:14:43.579 "digest": "sha256", 00:14:43.579 "dhgroup": "ffdhe8192" 00:14:43.579 } 00:14:43.579 } 00:14:43.579 ]' 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.579 11:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.836 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:43.836 11:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:44.770 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:45.027 11:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.027 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.028 11:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.959 00:14:45.959 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.959 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.959 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.217 { 00:14:46.217 "cntlid": 45, 00:14:46.217 "qid": 0, 00:14:46.217 "state": "enabled", 00:14:46.217 "thread": "nvmf_tgt_poll_group_000", 00:14:46.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:46.217 "listen_address": { 00:14:46.217 "trtype": "TCP", 00:14:46.217 "adrfam": "IPv4", 00:14:46.217 "traddr": "10.0.0.2", 00:14:46.217 "trsvcid": "4420" 00:14:46.217 }, 00:14:46.217 "peer_address": { 00:14:46.217 "trtype": "TCP", 00:14:46.217 "adrfam": "IPv4", 00:14:46.217 "traddr": "10.0.0.1", 00:14:46.217 "trsvcid": "57328" 00:14:46.217 }, 00:14:46.217 "auth": { 00:14:46.217 "state": "completed", 00:14:46.217 "digest": "sha256", 00:14:46.217 "dhgroup": "ffdhe8192" 00:14:46.217 } 00:14:46.217 } 00:14:46.217 ]' 00:14:46.217 
11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.217 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.474 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:46.474 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.474 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.474 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.474 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.731 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:46.731 11:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.663 11:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:47.920 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:47.920 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.920 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.920 11:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:47.920 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:47.920 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:47.921 11:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:48.853 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.853 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.111 { 00:14:49.111 "cntlid": 47, 00:14:49.111 "qid": 0, 00:14:49.111 "state": "enabled", 00:14:49.111 "thread": "nvmf_tgt_poll_group_000", 00:14:49.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:49.111 "listen_address": { 00:14:49.111 "trtype": "TCP", 00:14:49.111 "adrfam": "IPv4", 00:14:49.111 "traddr": "10.0.0.2", 00:14:49.111 "trsvcid": "4420" 00:14:49.111 }, 00:14:49.111 "peer_address": { 00:14:49.111 "trtype": "TCP", 00:14:49.111 "adrfam": "IPv4", 00:14:49.111 "traddr": "10.0.0.1", 00:14:49.111 "trsvcid": "42346" 00:14:49.111 }, 00:14:49.111 "auth": { 00:14:49.111 "state": "completed", 00:14:49.111 
"digest": "sha256", 00:14:49.111 "dhgroup": "ffdhe8192" 00:14:49.111 } 00:14:49.111 } 00:14:49.111 ]' 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.111 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.369 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:49.369 11:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.302 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:50.560 11:33:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.560 11:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:50.817 00:14:51.074 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.074 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.074 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.332 { 00:14:51.332 "cntlid": 49, 00:14:51.332 "qid": 0, 00:14:51.332 "state": "enabled", 00:14:51.332 "thread": "nvmf_tgt_poll_group_000", 00:14:51.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:51.332 "listen_address": { 00:14:51.332 "trtype": "TCP", 00:14:51.332 "adrfam": "IPv4", 
00:14:51.332 "traddr": "10.0.0.2", 00:14:51.332 "trsvcid": "4420" 00:14:51.332 }, 00:14:51.332 "peer_address": { 00:14:51.332 "trtype": "TCP", 00:14:51.332 "adrfam": "IPv4", 00:14:51.332 "traddr": "10.0.0.1", 00:14:51.332 "trsvcid": "42372" 00:14:51.332 }, 00:14:51.332 "auth": { 00:14:51.332 "state": "completed", 00:14:51.332 "digest": "sha384", 00:14:51.332 "dhgroup": "null" 00:14:51.332 } 00:14:51.332 } 00:14:51.332 ]' 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.332 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.590 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:51.590 11:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.523 11:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.781 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.040 00:14:53.040 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.040 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.040 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.299 { 00:14:53.299 "cntlid": 51, 00:14:53.299 "qid": 0, 00:14:53.299 "state": "enabled", 
00:14:53.299 "thread": "nvmf_tgt_poll_group_000", 00:14:53.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:53.299 "listen_address": { 00:14:53.299 "trtype": "TCP", 00:14:53.299 "adrfam": "IPv4", 00:14:53.299 "traddr": "10.0.0.2", 00:14:53.299 "trsvcid": "4420" 00:14:53.299 }, 00:14:53.299 "peer_address": { 00:14:53.299 "trtype": "TCP", 00:14:53.299 "adrfam": "IPv4", 00:14:53.299 "traddr": "10.0.0.1", 00:14:53.299 "trsvcid": "42412" 00:14:53.299 }, 00:14:53.299 "auth": { 00:14:53.299 "state": "completed", 00:14:53.299 "digest": "sha384", 00:14:53.299 "dhgroup": "null" 00:14:53.299 } 00:14:53.299 } 00:14:53.299 ]' 00:14:53.299 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.557 11:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.815 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:53.815 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:14:54.748 11:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.006 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.264 00:14:55.264 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.264 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.264 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.522 11:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.522 { 00:14:55.522 "cntlid": 53, 00:14:55.522 "qid": 0, 00:14:55.522 "state": "enabled", 00:14:55.522 "thread": "nvmf_tgt_poll_group_000", 00:14:55.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:55.522 "listen_address": { 00:14:55.522 "trtype": "TCP", 00:14:55.522 "adrfam": "IPv4", 00:14:55.522 "traddr": "10.0.0.2", 00:14:55.522 "trsvcid": "4420" 00:14:55.522 }, 00:14:55.522 "peer_address": { 00:14:55.522 "trtype": "TCP", 00:14:55.522 "adrfam": "IPv4", 00:14:55.522 "traddr": "10.0.0.1", 00:14:55.522 "trsvcid": "42440" 00:14:55.522 }, 00:14:55.522 "auth": { 00:14:55.522 "state": "completed", 00:14:55.522 "digest": "sha384", 00:14:55.522 "dhgroup": "null" 00:14:55.522 } 00:14:55.522 } 00:14:55.522 ]' 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.522 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.781 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:55.781 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.781 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.781 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.781 11:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.039 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:56.039 11:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:14:56.972 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:56.973 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.230 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:57.230 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.230 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.230 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.231 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.489 00:14:57.489 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.489 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.489 11:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.747 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.747 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.747 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.747 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.747 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.747 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.748 { 00:14:57.748 "cntlid": 55, 00:14:57.748 "qid": 0, 00:14:57.748 "state": "enabled", 00:14:57.748 "thread": "nvmf_tgt_poll_group_000", 00:14:57.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:57.748 "listen_address": { 00:14:57.748 "trtype": "TCP", 00:14:57.748 "adrfam": "IPv4", 00:14:57.748 "traddr": "10.0.0.2", 00:14:57.748 "trsvcid": "4420" 00:14:57.748 }, 00:14:57.748 "peer_address": { 00:14:57.748 "trtype": "TCP", 00:14:57.748 "adrfam": "IPv4", 00:14:57.748 "traddr": "10.0.0.1", 00:14:57.748 "trsvcid": "34122" 00:14:57.748 }, 00:14:57.748 "auth": { 00:14:57.748 "state": "completed", 00:14:57.748 "digest": "sha384", 00:14:57.748 "dhgroup": "null" 00:14:57.748 } 00:14:57.748 } 00:14:57.748 ]' 00:14:57.748 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.748 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.748 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.748 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:57.748 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.005 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.005 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.005 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.263 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:58.263 11:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.198 11:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.198 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.456 11:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.716 00:14:59.716 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.716 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.716 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.974 { 00:14:59.974 "cntlid": 57, 00:14:59.974 "qid": 0, 00:14:59.974 "state": "enabled", 00:14:59.974 "thread": "nvmf_tgt_poll_group_000", 00:14:59.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:59.974 "listen_address": { 00:14:59.974 "trtype": "TCP", 00:14:59.974 "adrfam": "IPv4", 00:14:59.974 "traddr": "10.0.0.2", 00:14:59.974 "trsvcid": "4420" 00:14:59.974 }, 00:14:59.974 "peer_address": { 00:14:59.974 "trtype": "TCP", 00:14:59.974 "adrfam": "IPv4", 00:14:59.974 "traddr": "10.0.0.1", 00:14:59.974 "trsvcid": "34136" 00:14:59.974 }, 00:14:59.974 "auth": { 00:14:59.974 "state": "completed", 00:14:59.974 "digest": "sha384", 00:14:59.974 "dhgroup": "ffdhe2048" 00:14:59.974 } 00:14:59.974 } 00:14:59.974 ]' 00:14:59.974 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.232 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.489 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:00.490 11:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.423 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.681 11:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.939 00:15:01.939 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.939 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.939 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.505 { 00:15:02.505 "cntlid": 59, 00:15:02.505 "qid": 0, 00:15:02.505 "state": "enabled", 00:15:02.505 "thread": "nvmf_tgt_poll_group_000", 00:15:02.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:02.505 "listen_address": { 00:15:02.505 "trtype": "TCP", 00:15:02.505 "adrfam": "IPv4", 00:15:02.505 "traddr": "10.0.0.2", 00:15:02.505 "trsvcid": "4420" 00:15:02.505 }, 00:15:02.505 "peer_address": { 00:15:02.505 "trtype": "TCP", 00:15:02.505 "adrfam": "IPv4", 00:15:02.505 "traddr": "10.0.0.1", 00:15:02.505 "trsvcid": "34150" 00:15:02.505 }, 00:15:02.505 "auth": { 00:15:02.505 "state": "completed", 00:15:02.505 "digest": "sha384", 00:15:02.505 "dhgroup": "ffdhe2048" 00:15:02.505 } 00:15:02.505 } 00:15:02.505 ]' 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.505 11:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.762 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:02.762 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.697 11:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.955 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.213 00:15:04.213 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.213 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.213 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.471 { 00:15:04.471 "cntlid": 61, 00:15:04.471 "qid": 0, 00:15:04.471 "state": "enabled", 00:15:04.471 "thread": "nvmf_tgt_poll_group_000", 00:15:04.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:04.471 "listen_address": { 00:15:04.471 "trtype": "TCP", 00:15:04.471 "adrfam": "IPv4", 00:15:04.471 "traddr": "10.0.0.2", 00:15:04.471 "trsvcid": "4420" 00:15:04.471 }, 00:15:04.471 "peer_address": { 00:15:04.471 "trtype": "TCP", 00:15:04.471 "adrfam": "IPv4", 00:15:04.471 "traddr": "10.0.0.1", 00:15:04.471 "trsvcid": "34172" 00:15:04.471 }, 00:15:04.471 "auth": { 00:15:04.471 "state": "completed", 00:15:04.471 "digest": "sha384", 00:15:04.471 "dhgroup": "ffdhe2048" 00:15:04.471 } 00:15:04.471 } 00:15:04.471 ]' 00:15:04.471 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.751 11:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.027 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:05.027 11:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.961 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.220 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.479 00:15:06.479 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.479 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.479 11:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.737 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.737 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.737 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.737 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.737 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.737 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.737 { 00:15:06.737 "cntlid": 63, 00:15:06.737 "qid": 0, 00:15:06.737 "state": "enabled", 00:15:06.737 "thread": "nvmf_tgt_poll_group_000", 00:15:06.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:06.737 "listen_address": { 00:15:06.737 "trtype": "TCP", 00:15:06.737 "adrfam": "IPv4", 00:15:06.737 "traddr": "10.0.0.2", 00:15:06.737 "trsvcid": "4420" 00:15:06.737 }, 00:15:06.737 "peer_address": { 00:15:06.738 "trtype": "TCP", 00:15:06.738 "adrfam": "IPv4", 00:15:06.738 "traddr": "10.0.0.1", 00:15:06.738 "trsvcid": "57782" 00:15:06.738 }, 00:15:06.738 "auth": { 00:15:06.738 "state": "completed", 00:15:06.738 "digest": "sha384", 00:15:06.738 "dhgroup": "ffdhe2048" 00:15:06.738 } 00:15:06.738 } 00:15:06.738 ]' 00:15:06.738 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.997 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.256 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:07.256 11:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:08.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.191 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.451 11:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.710 
00:15:08.710 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.710 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.710 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.969 { 00:15:08.969 "cntlid": 65, 00:15:08.969 "qid": 0, 00:15:08.969 "state": "enabled", 00:15:08.969 "thread": "nvmf_tgt_poll_group_000", 00:15:08.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:08.969 "listen_address": { 00:15:08.969 "trtype": "TCP", 00:15:08.969 "adrfam": "IPv4", 00:15:08.969 "traddr": "10.0.0.2", 00:15:08.969 "trsvcid": "4420" 00:15:08.969 }, 00:15:08.969 "peer_address": { 00:15:08.969 "trtype": "TCP", 00:15:08.969 "adrfam": "IPv4", 00:15:08.969 "traddr": "10.0.0.1", 00:15:08.969 "trsvcid": "57800" 00:15:08.969 }, 00:15:08.969 "auth": { 00:15:08.969 "state": "completed", 00:15:08.969 "digest": "sha384", 00:15:08.969 "dhgroup": "ffdhe3072" 00:15:08.969 } 00:15:08.969 } 00:15:08.969 ]' 00:15:08.969 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.228 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.487 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:09.487 11:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.422 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.679 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.680 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.680 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.680 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.680 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.680 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.680 11:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.937 00:15:10.937 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.937 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.937 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.196 { 00:15:11.196 "cntlid": 67, 00:15:11.196 "qid": 0, 00:15:11.196 "state": "enabled", 00:15:11.196 "thread": "nvmf_tgt_poll_group_000", 00:15:11.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:11.196 "listen_address": { 00:15:11.196 "trtype": "TCP", 00:15:11.196 "adrfam": "IPv4", 00:15:11.196 "traddr": "10.0.0.2", 00:15:11.196 "trsvcid": "4420" 00:15:11.196 }, 00:15:11.196 "peer_address": { 00:15:11.196 "trtype": "TCP", 00:15:11.196 "adrfam": "IPv4", 00:15:11.196 "traddr": "10.0.0.1", 00:15:11.196 "trsvcid": "57824" 00:15:11.196 }, 00:15:11.196 "auth": { 00:15:11.196 "state": "completed", 00:15:11.196 "digest": "sha384", 00:15:11.196 "dhgroup": "ffdhe3072" 00:15:11.196 } 00:15:11.196 } 00:15:11.196 ]' 00:15:11.196 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.454 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.712 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret 
DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:11.712 11:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.645 11:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.903 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.904 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.162 00:15:13.162 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.162 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.162 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.419 { 00:15:13.419 "cntlid": 69, 00:15:13.419 "qid": 0, 00:15:13.419 "state": "enabled", 00:15:13.419 "thread": "nvmf_tgt_poll_group_000", 00:15:13.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:13.419 "listen_address": { 00:15:13.419 "trtype": "TCP", 00:15:13.419 "adrfam": "IPv4", 00:15:13.419 "traddr": "10.0.0.2", 00:15:13.419 "trsvcid": "4420" 00:15:13.419 }, 00:15:13.419 "peer_address": { 00:15:13.419 "trtype": "TCP", 00:15:13.419 "adrfam": "IPv4", 00:15:13.419 "traddr": "10.0.0.1", 00:15:13.419 "trsvcid": "57842" 00:15:13.419 }, 00:15:13.419 "auth": { 00:15:13.419 "state": "completed", 00:15:13.419 "digest": "sha384", 00:15:13.419 "dhgroup": "ffdhe3072" 00:15:13.419 } 00:15:13.419 } 00:15:13.419 ]' 00:15:13.419 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.677 11:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:13.936 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:13.936 11:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:14.869 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
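key3 is registered above without a controller key: the ckeys entry for index 3 is empty, so the ${ckeys[$3]:+...} expansion seen in the trace drops the --dhchap-ctrlr-key argument entirely, and both nvmf_subsystem_add_host and bdev_nvme_attach_controller run with --dhchap-key key3 alone. A minimal illustration of that expansion (array contents are placeholders, not the real secrets):

# ${var:+word} expands to word only when var is set and non-empty, so an
# empty ckeys[3] removes the optional flag instead of passing an empty value.
ckeys=("ctrl-secret-0" "ctrl-secret-1" "ctrl-secret-2" "")
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]:-<no controller key>}"    # prints: <no controller key>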
00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.127 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.694 00:15:15.694 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.694 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.694 11:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.953 { 00:15:15.953 "cntlid": 71, 00:15:15.953 "qid": 0, 00:15:15.953 "state": "enabled", 00:15:15.953 "thread": "nvmf_tgt_poll_group_000", 00:15:15.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:15.953 "listen_address": { 00:15:15.953 "trtype": "TCP", 00:15:15.953 "adrfam": "IPv4", 00:15:15.953 "traddr": "10.0.0.2", 00:15:15.953 "trsvcid": "4420" 00:15:15.953 }, 00:15:15.953 "peer_address": { 00:15:15.953 "trtype": "TCP", 00:15:15.953 "adrfam": "IPv4", 00:15:15.953 "traddr": "10.0.0.1", 00:15:15.953 "trsvcid": "57872" 00:15:15.953 }, 00:15:15.953 "auth": { 00:15:15.953 "state": "completed", 00:15:15.953 "digest": "sha384", 00:15:15.953 "dhgroup": "ffdhe3072" 00:15:15.953 } 00:15:15.953 } 00:15:15.953 ]' 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.953 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.212 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:16.212 11:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.144 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
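Besides the bdev path, every iteration above also re-authenticates through the kernel initiator with nvme-cli, passing the DHHC-1 secrets literally rather than by registered key name, and tears the association down again before the host is removed from the subsystem. Roughly, with the secrets abbreviated (full values appear in the trace):

# Kernel-initiator leg of an iteration: connect with explicit DH-CHAP secrets,
# then disconnect. For key3 there is no --dhchap-ctrl-secret, matching the
# missing controller key noted above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0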
00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.402 11:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.968 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.968 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.225 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.225 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.225 { 00:15:18.225 "cntlid": 73, 00:15:18.225 "qid": 0, 00:15:18.225 "state": "enabled", 00:15:18.226 "thread": "nvmf_tgt_poll_group_000", 00:15:18.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:18.226 "listen_address": { 00:15:18.226 "trtype": "TCP", 00:15:18.226 "adrfam": "IPv4", 00:15:18.226 "traddr": "10.0.0.2", 00:15:18.226 "trsvcid": "4420" 00:15:18.226 }, 00:15:18.226 "peer_address": { 00:15:18.226 "trtype": "TCP", 00:15:18.226 "adrfam": "IPv4", 00:15:18.226 "traddr": "10.0.0.1", 00:15:18.226 "trsvcid": "36940" 00:15:18.226 }, 00:15:18.226 "auth": { 00:15:18.226 "state": "completed", 00:15:18.226 "digest": "sha384", 00:15:18.226 "dhgroup": "ffdhe4096" 00:15:18.226 } 00:15:18.226 } 00:15:18.226 ]' 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.226 
11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.226 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.484 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:18.484 11:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.417 11:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.676 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.242 00:15:20.242 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.242 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.242 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.500 { 00:15:20.500 "cntlid": 75, 00:15:20.500 "qid": 0, 00:15:20.500 "state": "enabled", 00:15:20.500 "thread": "nvmf_tgt_poll_group_000", 00:15:20.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:20.500 "listen_address": { 00:15:20.500 "trtype": "TCP", 00:15:20.500 "adrfam": "IPv4", 00:15:20.500 "traddr": "10.0.0.2", 00:15:20.500 "trsvcid": "4420" 00:15:20.500 }, 00:15:20.500 "peer_address": { 00:15:20.500 "trtype": "TCP", 00:15:20.500 "adrfam": "IPv4", 00:15:20.500 "traddr": "10.0.0.1", 00:15:20.500 "trsvcid": "36968" 00:15:20.500 }, 00:15:20.500 "auth": { 00:15:20.500 "state": "completed", 00:15:20.500 "digest": "sha384", 00:15:20.500 "dhgroup": "ffdhe4096" 00:15:20.500 } 00:15:20.500 } 00:15:20.500 ]' 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.500 11:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.758 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:20.758 11:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:21.691 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.691 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.691 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.691 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.691 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.691 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.692 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:21.692 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.258 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.516 00:15:22.516 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.516 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.516 11:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.774 { 00:15:22.774 "cntlid": 77, 00:15:22.774 "qid": 0, 00:15:22.774 "state": "enabled", 00:15:22.774 "thread": "nvmf_tgt_poll_group_000", 00:15:22.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:22.774 "listen_address": { 00:15:22.774 "trtype": "TCP", 00:15:22.774 "adrfam": "IPv4", 00:15:22.774 "traddr": "10.0.0.2", 00:15:22.774 "trsvcid": "4420" 00:15:22.774 }, 00:15:22.774 "peer_address": { 00:15:22.774 "trtype": "TCP", 00:15:22.774 "adrfam": "IPv4", 00:15:22.774 "traddr": "10.0.0.1", 00:15:22.774 "trsvcid": "37000" 00:15:22.774 }, 00:15:22.774 "auth": { 00:15:22.774 "state": "completed", 00:15:22.774 "digest": "sha384", 00:15:22.774 "dhgroup": "ffdhe4096" 00:15:22.774 } 00:15:22.774 } 00:15:22.774 ]' 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.774 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.774 11:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.032 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.032 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.032 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.032 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.032 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.290 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:23.290 11:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.233 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.491 11:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.748 00:15:25.006 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.006 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.006 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.265 { 00:15:25.265 "cntlid": 79, 00:15:25.265 "qid": 0, 00:15:25.265 "state": "enabled", 00:15:25.265 "thread": "nvmf_tgt_poll_group_000", 00:15:25.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:25.265 "listen_address": { 00:15:25.265 "trtype": "TCP", 00:15:25.265 "adrfam": "IPv4", 00:15:25.265 "traddr": "10.0.0.2", 00:15:25.265 "trsvcid": "4420" 00:15:25.265 }, 00:15:25.265 "peer_address": { 00:15:25.265 "trtype": "TCP", 00:15:25.265 "adrfam": "IPv4", 00:15:25.265 "traddr": "10.0.0.1", 00:15:25.265 "trsvcid": "37026" 00:15:25.265 }, 00:15:25.265 "auth": { 00:15:25.265 "state": "completed", 00:15:25.265 "digest": "sha384", 00:15:25.265 "dhgroup": "ffdhe4096" 00:15:25.265 } 00:15:25.265 } 00:15:25.265 ]' 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.265 11:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.265 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.523 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:25.523 11:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:26.457 11:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.715 11:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.715 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.281 00:15:27.281 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.281 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.281 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.540 { 00:15:27.540 "cntlid": 81, 00:15:27.540 "qid": 0, 00:15:27.540 "state": "enabled", 00:15:27.540 "thread": "nvmf_tgt_poll_group_000", 00:15:27.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:27.540 "listen_address": { 00:15:27.540 "trtype": "TCP", 00:15:27.540 "adrfam": "IPv4", 00:15:27.540 "traddr": "10.0.0.2", 00:15:27.540 "trsvcid": "4420" 00:15:27.540 }, 00:15:27.540 "peer_address": { 00:15:27.540 "trtype": "TCP", 00:15:27.540 "adrfam": "IPv4", 00:15:27.540 "traddr": "10.0.0.1", 00:15:27.540 "trsvcid": "54656" 00:15:27.540 }, 00:15:27.540 "auth": { 00:15:27.540 "state": "completed", 00:15:27.540 "digest": 
"sha384", 00:15:27.540 "dhgroup": "ffdhe6144" 00:15:27.540 } 00:15:27.540 } 00:15:27.540 ]' 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.540 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.800 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.800 11:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.800 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.800 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.800 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.058 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:28.058 11:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.992 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.251 11:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.817 00:15:29.817 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.817 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.817 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.076 { 00:15:30.076 "cntlid": 83, 00:15:30.076 "qid": 0, 00:15:30.076 "state": "enabled", 00:15:30.076 "thread": "nvmf_tgt_poll_group_000", 00:15:30.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:30.076 "listen_address": { 00:15:30.076 "trtype": "TCP", 00:15:30.076 "adrfam": "IPv4", 00:15:30.076 "traddr": "10.0.0.2", 00:15:30.076 
"trsvcid": "4420" 00:15:30.076 }, 00:15:30.076 "peer_address": { 00:15:30.076 "trtype": "TCP", 00:15:30.076 "adrfam": "IPv4", 00:15:30.076 "traddr": "10.0.0.1", 00:15:30.076 "trsvcid": "54684" 00:15:30.076 }, 00:15:30.076 "auth": { 00:15:30.076 "state": "completed", 00:15:30.076 "digest": "sha384", 00:15:30.076 "dhgroup": "ffdhe6144" 00:15:30.076 } 00:15:30.076 } 00:15:30.076 ]' 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.076 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.335 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:30.335 11:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:31.267 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.267 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:31.267 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.267 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.268 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.268 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.268 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.268 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.526 
11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.526 11:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.092 00:15:32.092 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.092 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.092 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.350 { 00:15:32.350 "cntlid": 85, 00:15:32.350 "qid": 0, 00:15:32.350 "state": "enabled", 00:15:32.350 "thread": "nvmf_tgt_poll_group_000", 00:15:32.350 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:32.350 "listen_address": { 00:15:32.350 "trtype": "TCP", 00:15:32.350 "adrfam": "IPv4", 00:15:32.350 "traddr": "10.0.0.2", 00:15:32.350 "trsvcid": "4420" 00:15:32.350 }, 00:15:32.350 "peer_address": { 00:15:32.350 "trtype": "TCP", 00:15:32.350 "adrfam": "IPv4", 00:15:32.350 "traddr": "10.0.0.1", 00:15:32.350 "trsvcid": "54718" 00:15:32.350 }, 00:15:32.350 "auth": { 00:15:32.350 "state": "completed", 00:15:32.350 "digest": "sha384", 00:15:32.350 "dhgroup": "ffdhe6144" 00:15:32.350 } 00:15:32.350 } 00:15:32.350 ]' 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.350 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.608 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.608 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.608 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.608 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.608 11:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.866 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:32.866 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.834 11:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.834 11:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.092 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:34.092 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.092 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.092 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.092 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.093 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.737 00:15:34.737 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.737 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.737 11:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.737 { 00:15:34.737 "cntlid": 87, 
00:15:34.737 "qid": 0, 00:15:34.737 "state": "enabled", 00:15:34.737 "thread": "nvmf_tgt_poll_group_000", 00:15:34.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:34.737 "listen_address": { 00:15:34.737 "trtype": "TCP", 00:15:34.737 "adrfam": "IPv4", 00:15:34.737 "traddr": "10.0.0.2", 00:15:34.737 "trsvcid": "4420" 00:15:34.737 }, 00:15:34.737 "peer_address": { 00:15:34.737 "trtype": "TCP", 00:15:34.737 "adrfam": "IPv4", 00:15:34.737 "traddr": "10.0.0.1", 00:15:34.737 "trsvcid": "54744" 00:15:34.737 }, 00:15:34.737 "auth": { 00:15:34.737 "state": "completed", 00:15:34.737 "digest": "sha384", 00:15:34.737 "dhgroup": "ffdhe6144" 00:15:34.737 } 00:15:34.737 } 00:15:34.737 ]' 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.737 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.995 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.995 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.995 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.995 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.995 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.253 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:35.253 11:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.188 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.445 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.446 11:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.379 00:15:37.379 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.379 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.379 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.637 { 00:15:37.637 "cntlid": 89, 00:15:37.637 "qid": 0, 00:15:37.637 "state": "enabled", 00:15:37.637 "thread": "nvmf_tgt_poll_group_000", 00:15:37.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:37.637 "listen_address": { 00:15:37.637 "trtype": "TCP", 00:15:37.637 "adrfam": "IPv4", 00:15:37.637 "traddr": "10.0.0.2", 00:15:37.637 "trsvcid": "4420" 00:15:37.637 }, 00:15:37.637 "peer_address": { 00:15:37.637 "trtype": "TCP", 00:15:37.637 "adrfam": "IPv4", 00:15:37.637 "traddr": "10.0.0.1", 00:15:37.637 "trsvcid": "43204" 00:15:37.637 }, 00:15:37.637 "auth": { 00:15:37.637 "state": "completed", 00:15:37.637 "digest": "sha384", 00:15:37.637 "dhgroup": "ffdhe8192" 00:15:37.637 } 00:15:37.637 } 00:15:37.637 ]' 00:15:37.637 11:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.637 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.637 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.637 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.637 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.896 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.896 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.896 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.153 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:38.153 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.088 11:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.088 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.346 11:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.279 00:15:40.279 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.279 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.279 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.537 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.537 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:40.537 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.537 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.537 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.537 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.537 { 00:15:40.537 "cntlid": 91, 00:15:40.537 "qid": 0, 00:15:40.537 "state": "enabled", 00:15:40.537 "thread": "nvmf_tgt_poll_group_000", 00:15:40.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:40.538 "listen_address": { 00:15:40.538 "trtype": "TCP", 00:15:40.538 "adrfam": "IPv4", 00:15:40.538 "traddr": "10.0.0.2", 00:15:40.538 "trsvcid": "4420" 00:15:40.538 }, 00:15:40.538 "peer_address": { 00:15:40.538 "trtype": "TCP", 00:15:40.538 "adrfam": "IPv4", 00:15:40.538 "traddr": "10.0.0.1", 00:15:40.538 "trsvcid": "43214" 00:15:40.538 }, 00:15:40.538 "auth": { 00:15:40.538 "state": "completed", 00:15:40.538 "digest": "sha384", 00:15:40.538 "dhgroup": "ffdhe8192" 00:15:40.538 } 00:15:40.538 } 00:15:40.538 ]' 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.538 11:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.796 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:40.796 11:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:41.731 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.989 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.989 11:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.989 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.989 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.247 11:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.182 00:15:43.182 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.182 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.182 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.440 11:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.440 { 00:15:43.440 "cntlid": 93, 00:15:43.440 "qid": 0, 00:15:43.440 "state": "enabled", 00:15:43.440 "thread": "nvmf_tgt_poll_group_000", 00:15:43.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:43.440 "listen_address": { 00:15:43.440 "trtype": "TCP", 00:15:43.440 "adrfam": "IPv4", 00:15:43.440 "traddr": "10.0.0.2", 00:15:43.440 "trsvcid": "4420" 00:15:43.440 }, 00:15:43.440 "peer_address": { 00:15:43.440 "trtype": "TCP", 00:15:43.440 "adrfam": "IPv4", 00:15:43.440 "traddr": "10.0.0.1", 00:15:43.440 "trsvcid": "43250" 00:15:43.440 }, 00:15:43.440 "auth": { 00:15:43.440 "state": "completed", 00:15:43.440 "digest": "sha384", 00:15:43.440 "dhgroup": "ffdhe8192" 00:15:43.440 } 00:15:43.440 } 00:15:43.440 ]' 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.440 11:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.697 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:43.697 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.630 11:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.630 11:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.888 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.821 00:15:45.821 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.821 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.821 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.080 { 00:15:46.080 "cntlid": 95, 00:15:46.080 "qid": 0, 00:15:46.080 "state": "enabled", 00:15:46.080 "thread": "nvmf_tgt_poll_group_000", 00:15:46.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:46.080 "listen_address": { 00:15:46.080 "trtype": "TCP", 00:15:46.080 "adrfam": "IPv4", 00:15:46.080 "traddr": "10.0.0.2", 00:15:46.080 "trsvcid": "4420" 00:15:46.080 }, 00:15:46.080 "peer_address": { 00:15:46.080 "trtype": "TCP", 00:15:46.080 "adrfam": "IPv4", 00:15:46.080 "traddr": "10.0.0.1", 00:15:46.080 "trsvcid": "43282" 00:15:46.080 }, 00:15:46.080 "auth": { 00:15:46.080 "state": "completed", 00:15:46.080 "digest": "sha384", 00:15:46.080 "dhgroup": "ffdhe8192" 00:15:46.080 } 00:15:46.080 } 00:15:46.080 ]' 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.080 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.338 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.338 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.338 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.338 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.338 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.596 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:46.596 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.530 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.530 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:47.788 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.046 00:15:48.046 
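The target/auth.sh@118-@123 markers that just reappeared above are the nested loops driving this whole phase; here the digest loop has advanced from sha384 to sha512 and the DH-group loop has restarted at null. Reconstructed from those traced loop headers, the driver looks roughly like the following; the exact contents of the digests, dhgroups, and keys arrays are not visible in this excerpt and are assumed (only sha384/sha512, the ffdhe groups, and null appear here):

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # one digest/DH-group combination per pass, then a full connect/verify cycle
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done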
11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.046 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.046 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.304 { 00:15:48.304 "cntlid": 97, 00:15:48.304 "qid": 0, 00:15:48.304 "state": "enabled", 00:15:48.304 "thread": "nvmf_tgt_poll_group_000", 00:15:48.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:48.304 "listen_address": { 00:15:48.304 "trtype": "TCP", 00:15:48.304 "adrfam": "IPv4", 00:15:48.304 "traddr": "10.0.0.2", 00:15:48.304 "trsvcid": "4420" 00:15:48.304 }, 00:15:48.304 "peer_address": { 00:15:48.304 "trtype": "TCP", 00:15:48.304 "adrfam": "IPv4", 00:15:48.304 "traddr": "10.0.0.1", 00:15:48.304 "trsvcid": "52574" 00:15:48.304 }, 00:15:48.304 "auth": { 00:15:48.304 "state": "completed", 00:15:48.304 "digest": "sha512", 00:15:48.304 "dhgroup": "null" 00:15:48.304 } 00:15:48.304 } 00:15:48.304 ]' 00:15:48.304 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.562 11:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.819 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:48.819 11:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:49.749 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.005 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.262 00:15:50.262 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.262 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.262 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.519 { 00:15:50.519 "cntlid": 99, 00:15:50.519 "qid": 0, 00:15:50.519 "state": "enabled", 00:15:50.519 "thread": "nvmf_tgt_poll_group_000", 00:15:50.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:50.519 "listen_address": { 00:15:50.519 "trtype": "TCP", 00:15:50.519 "adrfam": "IPv4", 00:15:50.519 "traddr": "10.0.0.2", 00:15:50.519 "trsvcid": "4420" 00:15:50.519 }, 00:15:50.519 "peer_address": { 00:15:50.519 "trtype": "TCP", 00:15:50.519 "adrfam": "IPv4", 00:15:50.519 "traddr": "10.0.0.1", 00:15:50.519 "trsvcid": "52604" 00:15:50.519 }, 00:15:50.519 "auth": { 00:15:50.519 "state": "completed", 00:15:50.519 "digest": "sha512", 00:15:50.519 "dhgroup": "null" 00:15:50.519 } 00:15:50.519 } 00:15:50.519 ]' 00:15:50.519 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.777 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.777 11:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.777 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:50.777 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.777 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.777 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.777 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.035 11:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:51.035 11:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.974 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.232 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:52.232 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
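Each key index in the trace above goes through the same authentication round trip: the host-side RPC server is limited to one digest/dhgroup combination, the target registers the host NQN with its DH-HMAC-CHAP key (and optional controller key), a controller is attached over TCP and its qpair inspected, then the controller is detached, the handshake is repeated with nvme-cli, and the host entry is removed again. A condensed sketch of that per-key cycle, pieced together from the commands visible in the log; $HOSTNQN, $HOSTID, $KEY, $CKEY and the secrets are illustrative placeholders, not literal values from this run:

    # host side: restrict negotiation to the digest/dhgroup under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    # target side: register the host NQN with its DH-HMAC-CHAP key names
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key "$KEY" --dhchap-ctrlr-key "$CKEY"
    # attach a bdev controller over TCP, authenticating with the same keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "$KEY" --dhchap-ctrlr-key "$CKEY"
    # inspect the qpair on the target, then tear the controller down again
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then clean up the host entry
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"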
00:15:52.233 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.490 00:15:52.490 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.490 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.490 11:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.748 { 00:15:52.748 "cntlid": 101, 00:15:52.748 "qid": 0, 00:15:52.748 "state": "enabled", 00:15:52.748 "thread": "nvmf_tgt_poll_group_000", 00:15:52.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:52.748 "listen_address": { 00:15:52.748 "trtype": "TCP", 00:15:52.748 "adrfam": "IPv4", 00:15:52.748 "traddr": "10.0.0.2", 00:15:52.748 "trsvcid": "4420" 00:15:52.748 }, 00:15:52.748 "peer_address": { 00:15:52.748 "trtype": "TCP", 00:15:52.748 "adrfam": "IPv4", 00:15:52.748 "traddr": "10.0.0.1", 00:15:52.748 "trsvcid": "52636" 00:15:52.748 }, 00:15:52.748 "auth": { 00:15:52.748 "state": "completed", 00:15:52.748 "digest": "sha512", 00:15:52.748 "dhgroup": "null" 00:15:52.748 } 00:15:52.748 } 00:15:52.748 ]' 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.748 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.007 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.007 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.007 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.007 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.007 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.265 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:53.265 11:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.198 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.459 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.460 11:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.720 00:15:54.720 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.720 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.720 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.977 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.977 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.977 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.977 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.977 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.977 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.977 { 00:15:54.977 "cntlid": 103, 00:15:54.977 "qid": 0, 00:15:54.978 "state": "enabled", 00:15:54.978 "thread": "nvmf_tgt_poll_group_000", 00:15:54.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:54.978 "listen_address": { 00:15:54.978 "trtype": "TCP", 00:15:54.978 "adrfam": "IPv4", 00:15:54.978 "traddr": "10.0.0.2", 00:15:54.978 "trsvcid": "4420" 00:15:54.978 }, 00:15:54.978 "peer_address": { 00:15:54.978 "trtype": "TCP", 00:15:54.978 "adrfam": "IPv4", 00:15:54.978 "traddr": "10.0.0.1", 00:15:54.978 "trsvcid": "52666" 00:15:54.978 }, 00:15:54.978 "auth": { 00:15:54.978 "state": "completed", 00:15:54.978 "digest": "sha512", 00:15:54.978 "dhgroup": "null" 00:15:54.978 } 00:15:54.978 } 00:15:54.978 ]' 00:15:54.978 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.978 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.978 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.978 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:54.978 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.236 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.236 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.236 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.494 11:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:55.494 11:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.428 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.686 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.686 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
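The cycle does not run only once: the auth.sh@118-121 markers in the trace show it sitting inside three nested loops, and this is the point where the dhgroup loop advances from null to ffdhe2048 while the digest stays at sha512. A rough reconstruction of that driver loop from the xtrace line numbers; only sha512, null, ffdhe2048 (and, further down, ffdhe3072) are confirmed by this excerpt, so the full array contents are an assumption:

    # inferred from the target/auth.sh@118-123 markers in the trace above
    for digest in "${digests[@]}"; do            # auth.sh@118
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119
            for keyid in "${!keys[@]}"; do       # auth.sh@120
                # restrict the host to this digest/dhgroup pair ...            # auth.sh@121
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # ... then run the per-key connect/verify/disconnect cycle     # auth.sh@123
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done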
00:15:56.686 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.686 11:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.944 00:15:56.944 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.944 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.944 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.202 { 00:15:57.202 "cntlid": 105, 00:15:57.202 "qid": 0, 00:15:57.202 "state": "enabled", 00:15:57.202 "thread": "nvmf_tgt_poll_group_000", 00:15:57.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:57.202 "listen_address": { 00:15:57.202 "trtype": "TCP", 00:15:57.202 "adrfam": "IPv4", 00:15:57.202 "traddr": "10.0.0.2", 00:15:57.202 "trsvcid": "4420" 00:15:57.202 }, 00:15:57.202 "peer_address": { 00:15:57.202 "trtype": "TCP", 00:15:57.202 "adrfam": "IPv4", 00:15:57.202 "traddr": "10.0.0.1", 00:15:57.202 "trsvcid": "49468" 00:15:57.202 }, 00:15:57.202 "auth": { 00:15:57.202 "state": "completed", 00:15:57.202 "digest": "sha512", 00:15:57.202 "dhgroup": "ffdhe2048" 00:15:57.202 } 00:15:57.202 } 00:15:57.202 ]' 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.202 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.202 11:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.461 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:57.461 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.395 11:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.654 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.219 00:15:59.219 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.219 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.219 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.480 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.480 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.480 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.480 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.480 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.480 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.480 { 00:15:59.480 "cntlid": 107, 00:15:59.480 "qid": 0, 00:15:59.480 "state": "enabled", 00:15:59.480 "thread": "nvmf_tgt_poll_group_000", 00:15:59.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:59.480 "listen_address": { 00:15:59.480 "trtype": "TCP", 00:15:59.480 "adrfam": "IPv4", 00:15:59.480 "traddr": "10.0.0.2", 00:15:59.480 "trsvcid": "4420" 00:15:59.480 }, 00:15:59.480 "peer_address": { 00:15:59.480 "trtype": "TCP", 00:15:59.480 "adrfam": "IPv4", 00:15:59.480 "traddr": "10.0.0.1", 00:15:59.480 "trsvcid": "49502" 00:15:59.480 }, 00:15:59.480 "auth": { 00:15:59.480 "state": "completed", 00:15:59.480 "digest": "sha512", 00:15:59.480 "dhgroup": "ffdhe2048" 00:15:59.480 } 00:15:59.480 } 00:15:59.480 ]' 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.481 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.740 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:15:59.740 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.673 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
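Throughout the log, every successful attach is followed by the same checks: the host reports the expected controller name, and the target reports a qpair whose digest, dhgroup and authentication state match what was negotiated. A minimal sketch of those assertions, assuming the qpair JSON layout shown in the dumps above (rpc_cmd and hostrpc are the wrappers from the trace; the real script's quoting differs slightly):

    # host: the attached controller should show up as nvme0
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # target: the qpair should have completed authentication with the expected parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]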
00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.931 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.498 00:16:01.498 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.498 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.498 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.757 { 00:16:01.757 "cntlid": 109, 00:16:01.757 "qid": 0, 00:16:01.757 "state": "enabled", 00:16:01.757 "thread": "nvmf_tgt_poll_group_000", 00:16:01.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:01.757 "listen_address": { 00:16:01.757 "trtype": "TCP", 00:16:01.757 "adrfam": "IPv4", 00:16:01.757 "traddr": "10.0.0.2", 00:16:01.757 "trsvcid": "4420" 00:16:01.757 }, 00:16:01.757 "peer_address": { 00:16:01.757 "trtype": "TCP", 00:16:01.757 "adrfam": "IPv4", 00:16:01.757 "traddr": "10.0.0.1", 00:16:01.757 "trsvcid": "49526" 00:16:01.757 }, 00:16:01.757 "auth": { 00:16:01.757 "state": "completed", 00:16:01.757 "digest": "sha512", 00:16:01.757 "dhgroup": "ffdhe2048" 00:16:01.757 } 00:16:01.757 } 00:16:01.757 ]' 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.757 11:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.757 11:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.757 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.757 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.757 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.757 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.015 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:02.015 11:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:02.948 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.206 11:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.206 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.470 00:16:03.470 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.470 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.470 11:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.727 { 00:16:03.727 "cntlid": 111, 00:16:03.727 "qid": 0, 00:16:03.727 "state": "enabled", 00:16:03.727 "thread": "nvmf_tgt_poll_group_000", 00:16:03.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:03.727 "listen_address": { 00:16:03.727 "trtype": "TCP", 00:16:03.727 "adrfam": "IPv4", 00:16:03.727 "traddr": "10.0.0.2", 00:16:03.727 "trsvcid": "4420" 00:16:03.727 }, 00:16:03.727 "peer_address": { 00:16:03.727 "trtype": "TCP", 00:16:03.727 "adrfam": "IPv4", 00:16:03.727 "traddr": "10.0.0.1", 00:16:03.727 "trsvcid": "49548" 00:16:03.727 }, 00:16:03.727 "auth": { 00:16:03.727 "state": "completed", 00:16:03.727 "digest": "sha512", 00:16:03.727 "dhgroup": "ffdhe2048" 00:16:03.727 } 00:16:03.727 } 00:16:03.727 ]' 00:16:03.727 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.985 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.985 
11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.985 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.985 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.985 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.985 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.985 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.243 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:04.243 11:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.212 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.470 11:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.728 00:16:05.728 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.728 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.728 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.985 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.985 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.985 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.985 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.985 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.986 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.986 { 00:16:05.986 "cntlid": 113, 00:16:05.986 "qid": 0, 00:16:05.986 "state": "enabled", 00:16:05.986 "thread": "nvmf_tgt_poll_group_000", 00:16:05.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:05.986 "listen_address": { 00:16:05.986 "trtype": "TCP", 00:16:05.986 "adrfam": "IPv4", 00:16:05.986 "traddr": "10.0.0.2", 00:16:05.986 "trsvcid": "4420" 00:16:05.986 }, 00:16:05.986 "peer_address": { 00:16:05.986 "trtype": "TCP", 00:16:05.986 "adrfam": "IPv4", 00:16:05.986 "traddr": "10.0.0.1", 00:16:05.986 "trsvcid": "49578" 00:16:05.986 }, 00:16:05.986 "auth": { 00:16:05.986 "state": "completed", 00:16:05.986 "digest": "sha512", 00:16:05.986 "dhgroup": "ffdhe3072" 00:16:05.986 } 00:16:05.986 } 00:16:05.986 ]' 00:16:05.986 11:34:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.986 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.243 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.243 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.243 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.243 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.243 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.243 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.500 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:06.500 11:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.434 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.692 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.950 00:16:07.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.950 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.208 { 00:16:08.208 "cntlid": 115, 00:16:08.208 "qid": 0, 00:16:08.208 "state": "enabled", 00:16:08.208 "thread": "nvmf_tgt_poll_group_000", 00:16:08.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:08.208 "listen_address": { 00:16:08.208 "trtype": "TCP", 00:16:08.208 "adrfam": "IPv4", 00:16:08.208 "traddr": "10.0.0.2", 00:16:08.208 "trsvcid": "4420" 00:16:08.208 }, 00:16:08.208 "peer_address": { 00:16:08.208 "trtype": "TCP", 00:16:08.208 "adrfam": "IPv4", 
00:16:08.208 "traddr": "10.0.0.1", 00:16:08.208 "trsvcid": "56450" 00:16:08.208 }, 00:16:08.208 "auth": { 00:16:08.208 "state": "completed", 00:16:08.208 "digest": "sha512", 00:16:08.208 "dhgroup": "ffdhe3072" 00:16:08.208 } 00:16:08.208 } 00:16:08.208 ]' 00:16:08.208 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.467 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.725 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:08.725 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.659 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.917 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.175 00:16:10.175 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.175 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.175 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.741 { 00:16:10.741 "cntlid": 117, 00:16:10.741 "qid": 0, 00:16:10.741 "state": "enabled", 00:16:10.741 "thread": "nvmf_tgt_poll_group_000", 00:16:10.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:10.741 "listen_address": { 00:16:10.741 "trtype": "TCP", 
00:16:10.741 "adrfam": "IPv4", 00:16:10.741 "traddr": "10.0.0.2", 00:16:10.741 "trsvcid": "4420" 00:16:10.741 }, 00:16:10.741 "peer_address": { 00:16:10.741 "trtype": "TCP", 00:16:10.741 "adrfam": "IPv4", 00:16:10.741 "traddr": "10.0.0.1", 00:16:10.741 "trsvcid": "56468" 00:16:10.741 }, 00:16:10.741 "auth": { 00:16:10.741 "state": "completed", 00:16:10.741 "digest": "sha512", 00:16:10.741 "dhgroup": "ffdhe3072" 00:16:10.741 } 00:16:10.741 } 00:16:10.741 ]' 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.741 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.999 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:10.999 11:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:11.934 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.192 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.758 00:16:12.758 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.758 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.758 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.758 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.758 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.758 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.758 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.015 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.015 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.015 { 00:16:13.015 "cntlid": 119, 00:16:13.015 "qid": 0, 00:16:13.015 "state": "enabled", 00:16:13.015 "thread": "nvmf_tgt_poll_group_000", 00:16:13.015 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:13.015 "listen_address": { 00:16:13.015 "trtype": "TCP", 00:16:13.015 "adrfam": "IPv4", 00:16:13.015 "traddr": "10.0.0.2", 00:16:13.015 "trsvcid": "4420" 00:16:13.015 }, 00:16:13.015 "peer_address": { 00:16:13.015 "trtype": "TCP", 00:16:13.015 "adrfam": "IPv4", 00:16:13.015 "traddr": "10.0.0.1", 00:16:13.015 "trsvcid": "56490" 00:16:13.015 }, 00:16:13.015 "auth": { 00:16:13.016 "state": "completed", 00:16:13.016 "digest": "sha512", 00:16:13.016 "dhgroup": "ffdhe3072" 00:16:13.016 } 00:16:13.016 } 00:16:13.016 ]' 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.016 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.274 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:13.274 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.206 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.206 11:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.463 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.028 00:16:15.028 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.028 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.028 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.287 11:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.287 { 00:16:15.287 "cntlid": 121, 00:16:15.287 "qid": 0, 00:16:15.287 "state": "enabled", 00:16:15.287 "thread": "nvmf_tgt_poll_group_000", 00:16:15.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:15.287 "listen_address": { 00:16:15.287 "trtype": "TCP", 00:16:15.287 "adrfam": "IPv4", 00:16:15.287 "traddr": "10.0.0.2", 00:16:15.287 "trsvcid": "4420" 00:16:15.287 }, 00:16:15.287 "peer_address": { 00:16:15.287 "trtype": "TCP", 00:16:15.287 "adrfam": "IPv4", 00:16:15.287 "traddr": "10.0.0.1", 00:16:15.287 "trsvcid": "56518" 00:16:15.287 }, 00:16:15.287 "auth": { 00:16:15.287 "state": "completed", 00:16:15.287 "digest": "sha512", 00:16:15.287 "dhgroup": "ffdhe4096" 00:16:15.287 } 00:16:15.287 } 00:16:15.287 ]' 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.287 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.545 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:15.545 11:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
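The other half of each cycle, visible just above, re-checks the same key with the kernel initiator: nvme-cli is handed the secrets in their on-the-wire DHHC-1 form rather than by key name, and the host entry is removed from the subsystem afterwards so the next key/dhgroup combination starts clean. A compressed restatement, with the DHHC-1 secret strings abbreviated to "..." (the full values are the ones printed in the trace) and the same $hostnqn/$subnqn shorthand as before:

# host side, kernel initiator: connect with explicit DHHC-1 secrets, then drop the connection again
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n "$subnqn"

# target side: revoke the host authorization before the next iteration
scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"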
00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.479 11:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.737 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.303 00:16:17.303 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.303 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.303 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.570 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.570 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.570 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.570 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.570 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.571 { 00:16:17.571 "cntlid": 123, 00:16:17.571 "qid": 0, 00:16:17.571 "state": "enabled", 00:16:17.571 "thread": "nvmf_tgt_poll_group_000", 00:16:17.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:17.571 "listen_address": { 00:16:17.571 "trtype": "TCP", 00:16:17.571 "adrfam": "IPv4", 00:16:17.571 "traddr": "10.0.0.2", 00:16:17.571 "trsvcid": "4420" 00:16:17.571 }, 00:16:17.571 "peer_address": { 00:16:17.571 "trtype": "TCP", 00:16:17.571 "adrfam": "IPv4", 00:16:17.571 "traddr": "10.0.0.1", 00:16:17.571 "trsvcid": "58488" 00:16:17.571 }, 00:16:17.571 "auth": { 00:16:17.571 "state": "completed", 00:16:17.571 "digest": "sha512", 00:16:17.571 "dhgroup": "ffdhe4096" 00:16:17.571 } 00:16:17.571 } 00:16:17.571 ]' 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.571 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.835 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:17.835 11:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.769 11:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.769 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.027 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.592 00:16:19.592 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.592 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.592 11:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.850 11:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.850 { 00:16:19.850 "cntlid": 125, 00:16:19.850 "qid": 0, 00:16:19.850 "state": "enabled", 00:16:19.850 "thread": "nvmf_tgt_poll_group_000", 00:16:19.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:19.850 "listen_address": { 00:16:19.850 "trtype": "TCP", 00:16:19.850 "adrfam": "IPv4", 00:16:19.850 "traddr": "10.0.0.2", 00:16:19.850 "trsvcid": "4420" 00:16:19.850 }, 00:16:19.850 "peer_address": { 00:16:19.850 "trtype": "TCP", 00:16:19.850 "adrfam": "IPv4", 00:16:19.850 "traddr": "10.0.0.1", 00:16:19.850 "trsvcid": "58508" 00:16:19.850 }, 00:16:19.850 "auth": { 00:16:19.850 "state": "completed", 00:16:19.850 "digest": "sha512", 00:16:19.850 "dhgroup": "ffdhe4096" 00:16:19.850 } 00:16:19.850 } 00:16:19.850 ]' 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.850 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.108 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:20.108 11:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.040 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.299 11:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.865 00:16:21.865 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.865 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.865 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.123 11:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.123 { 00:16:22.123 "cntlid": 127, 00:16:22.123 "qid": 0, 00:16:22.123 "state": "enabled", 00:16:22.123 "thread": "nvmf_tgt_poll_group_000", 00:16:22.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:22.123 "listen_address": { 00:16:22.123 "trtype": "TCP", 00:16:22.123 "adrfam": "IPv4", 00:16:22.123 "traddr": "10.0.0.2", 00:16:22.123 "trsvcid": "4420" 00:16:22.123 }, 00:16:22.123 "peer_address": { 00:16:22.123 "trtype": "TCP", 00:16:22.123 "adrfam": "IPv4", 00:16:22.123 "traddr": "10.0.0.1", 00:16:22.123 "trsvcid": "58528" 00:16:22.123 }, 00:16:22.123 "auth": { 00:16:22.123 "state": "completed", 00:16:22.123 "digest": "sha512", 00:16:22.123 "dhgroup": "ffdhe4096" 00:16:22.123 } 00:16:22.123 } 00:16:22.123 ]' 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.123 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.381 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:22.381 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.315 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.574 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.139 00:16:24.139 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.139 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.139 
11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.397 { 00:16:24.397 "cntlid": 129, 00:16:24.397 "qid": 0, 00:16:24.397 "state": "enabled", 00:16:24.397 "thread": "nvmf_tgt_poll_group_000", 00:16:24.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:24.397 "listen_address": { 00:16:24.397 "trtype": "TCP", 00:16:24.397 "adrfam": "IPv4", 00:16:24.397 "traddr": "10.0.0.2", 00:16:24.397 "trsvcid": "4420" 00:16:24.397 }, 00:16:24.397 "peer_address": { 00:16:24.397 "trtype": "TCP", 00:16:24.397 "adrfam": "IPv4", 00:16:24.397 "traddr": "10.0.0.1", 00:16:24.397 "trsvcid": "58562" 00:16:24.397 }, 00:16:24.397 "auth": { 00:16:24.397 "state": "completed", 00:16:24.397 "digest": "sha512", 00:16:24.397 "dhgroup": "ffdhe6144" 00:16:24.397 } 00:16:24.397 } 00:16:24.397 ]' 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.397 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.654 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.654 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.654 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.912 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:24.912 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret 
DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:25.845 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.845 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.845 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.845 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.846 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.846 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.846 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:25.846 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.103 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.104 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.104 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.670 00:16:26.670 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.670 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.670 11:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.928 { 00:16:26.928 "cntlid": 131, 00:16:26.928 "qid": 0, 00:16:26.928 "state": "enabled", 00:16:26.928 "thread": "nvmf_tgt_poll_group_000", 00:16:26.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.928 "listen_address": { 00:16:26.928 "trtype": "TCP", 00:16:26.928 "adrfam": "IPv4", 00:16:26.928 "traddr": "10.0.0.2", 00:16:26.928 "trsvcid": "4420" 00:16:26.928 }, 00:16:26.928 "peer_address": { 00:16:26.928 "trtype": "TCP", 00:16:26.928 "adrfam": "IPv4", 00:16:26.928 "traddr": "10.0.0.1", 00:16:26.928 "trsvcid": "48666" 00:16:26.928 }, 00:16:26.928 "auth": { 00:16:26.928 "state": "completed", 00:16:26.928 "digest": "sha512", 00:16:26.928 "dhgroup": "ffdhe6144" 00:16:26.928 } 00:16:26.928 } 00:16:26.928 ]' 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.928 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.186 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:27.186 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.119 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.377 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.942 00:16:28.942 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.943 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.943 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.201 { 00:16:29.201 "cntlid": 133, 00:16:29.201 "qid": 0, 00:16:29.201 "state": "enabled", 00:16:29.201 "thread": "nvmf_tgt_poll_group_000", 00:16:29.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:29.201 "listen_address": { 00:16:29.201 "trtype": "TCP", 00:16:29.201 "adrfam": "IPv4", 00:16:29.201 "traddr": "10.0.0.2", 00:16:29.201 "trsvcid": "4420" 00:16:29.201 }, 00:16:29.201 "peer_address": { 00:16:29.201 "trtype": "TCP", 00:16:29.201 "adrfam": "IPv4", 00:16:29.201 "traddr": "10.0.0.1", 00:16:29.201 "trsvcid": "48692" 00:16:29.201 }, 00:16:29.201 "auth": { 00:16:29.201 "state": "completed", 00:16:29.201 "digest": "sha512", 00:16:29.201 "dhgroup": "ffdhe6144" 00:16:29.201 } 00:16:29.201 } 00:16:29.201 ]' 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.201 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.459 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:29.459 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:30.392 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.650 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.650 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.650 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.651 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.651 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.651 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:30.909 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.474 00:16:31.474 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.474 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.474 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.732 { 00:16:31.732 "cntlid": 135, 00:16:31.732 "qid": 0, 00:16:31.732 "state": "enabled", 00:16:31.732 "thread": "nvmf_tgt_poll_group_000", 00:16:31.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.732 "listen_address": { 00:16:31.732 "trtype": "TCP", 00:16:31.732 "adrfam": "IPv4", 00:16:31.732 "traddr": "10.0.0.2", 00:16:31.732 "trsvcid": "4420" 00:16:31.732 }, 00:16:31.732 "peer_address": { 00:16:31.732 "trtype": "TCP", 00:16:31.732 "adrfam": "IPv4", 00:16:31.732 "traddr": "10.0.0.1", 00:16:31.732 "trsvcid": "48718" 00:16:31.732 }, 00:16:31.732 "auth": { 00:16:31.732 "state": "completed", 00:16:31.732 "digest": "sha512", 00:16:31.732 "dhgroup": "ffdhe6144" 00:16:31.732 } 00:16:31.732 } 00:16:31.732 ]' 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.732 11:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.732 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.732 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.732 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.732 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.732 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.990 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:31.990 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.924 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.182 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.127 00:16:34.127 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.127 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.127 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.416 { 00:16:34.416 "cntlid": 137, 00:16:34.416 "qid": 0, 00:16:34.416 "state": "enabled", 00:16:34.416 "thread": "nvmf_tgt_poll_group_000", 00:16:34.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:34.416 "listen_address": { 00:16:34.416 "trtype": "TCP", 00:16:34.416 "adrfam": "IPv4", 00:16:34.416 "traddr": "10.0.0.2", 00:16:34.416 "trsvcid": "4420" 00:16:34.416 }, 00:16:34.416 "peer_address": { 00:16:34.416 "trtype": "TCP", 00:16:34.416 "adrfam": "IPv4", 00:16:34.416 "traddr": "10.0.0.1", 00:16:34.416 "trsvcid": "48752" 00:16:34.416 }, 00:16:34.416 "auth": { 00:16:34.416 "state": "completed", 00:16:34.416 "digest": "sha512", 00:16:34.416 "dhgroup": "ffdhe8192" 00:16:34.416 } 00:16:34.416 } 00:16:34.416 ]' 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.416 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.726 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:34.726 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.686 11:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.944 11:35:16 
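
For reference, one pass of the key loop traced above reduces to the RPC sequence below. This is only a condensed restatement of the commands already shown in the trace (rpc.py abbreviates the full spdk/scripts/rpc.py path, and key1/ckey1 are the key names registered earlier in the run):

  # host side: limit the initiator to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: register the host NQN together with the key pair for this round
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach (this is what drives the DH-HMAC-CHAP exchange), verify, then detach
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
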
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.944 11:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.877 00:16:36.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.877 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.135 { 00:16:37.135 "cntlid": 139, 00:16:37.135 "qid": 0, 00:16:37.135 "state": "enabled", 00:16:37.135 "thread": "nvmf_tgt_poll_group_000", 00:16:37.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:37.135 "listen_address": { 00:16:37.135 "trtype": "TCP", 00:16:37.135 "adrfam": "IPv4", 00:16:37.135 "traddr": "10.0.0.2", 00:16:37.135 "trsvcid": "4420" 00:16:37.135 }, 00:16:37.135 "peer_address": { 00:16:37.135 "trtype": "TCP", 00:16:37.135 "adrfam": "IPv4", 00:16:37.135 "traddr": "10.0.0.1", 00:16:37.135 "trsvcid": "38296" 00:16:37.135 }, 00:16:37.135 "auth": { 00:16:37.135 "state": "completed", 00:16:37.135 "digest": "sha512", 00:16:37.135 "dhgroup": "ffdhe8192" 00:16:37.135 } 00:16:37.135 } 00:16:37.135 ]' 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.135 11:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.135 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.393 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:37.393 11:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: --dhchap-ctrl-secret DHHC-1:02:ZjRkYmYwZTJjMDhiYTk3MjcwNDE5MjVhZmE5MWI1ODNmZmVlMWJhYTVmMTBmMDRjgniEUQ==: 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.327 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.585 11:35:18 
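
The verification step that recurs after each attach can be reproduced with two read-only RPCs plus jq, exactly as the trace does it (same host socket and subsystem as above):

  # the attach above should have produced a controller named nvme0
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # and the target's view of the qpair should report the negotiated auth parameters
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
  # expected for this leg: sha512, ffdhe8192, completed
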
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.585 11:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.517 00:16:39.517 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.517 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.517 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.776 { 00:16:39.776 "cntlid": 141, 00:16:39.776 "qid": 0, 00:16:39.776 "state": "enabled", 00:16:39.776 "thread": "nvmf_tgt_poll_group_000", 00:16:39.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:39.776 "listen_address": { 00:16:39.776 "trtype": "TCP", 00:16:39.776 "adrfam": "IPv4", 00:16:39.776 "traddr": "10.0.0.2", 00:16:39.776 "trsvcid": "4420" 00:16:39.776 }, 00:16:39.776 "peer_address": { 00:16:39.776 "trtype": "TCP", 00:16:39.776 "adrfam": "IPv4", 00:16:39.776 "traddr": "10.0.0.1", 00:16:39.776 "trsvcid": "38322" 00:16:39.776 }, 00:16:39.776 "auth": { 00:16:39.776 "state": "completed", 00:16:39.776 "digest": "sha512", 00:16:39.776 "dhgroup": "ffdhe8192" 00:16:39.776 } 00:16:39.776 } 00:16:39.776 ]' 00:16:39.776 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.776 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.776 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.776 11:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.776 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.776 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.776 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.776 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.034 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:40.034 11:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:01:YzRiZTUwN2IwODE4MmZjZWY1YjAyOGYyNDEyMmQ1NjfbS+SE: 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.968 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.226 11:35:21 
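
Each round also pushes the same secrets through the kernel initiator: nvme-cli is handed the keys in their DHHC-1 wire form and must connect and then disconnect cleanly. In outline (the two secret strings are the ones printed in the trace and are elided here):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: 1 controller disconnected
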
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.226 11:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.159 00:16:42.159 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.159 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.159 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.418 { 00:16:42.418 "cntlid": 143, 00:16:42.418 "qid": 0, 00:16:42.418 "state": "enabled", 00:16:42.418 "thread": "nvmf_tgt_poll_group_000", 00:16:42.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:42.418 "listen_address": { 00:16:42.418 "trtype": "TCP", 00:16:42.418 "adrfam": "IPv4", 00:16:42.418 "traddr": "10.0.0.2", 00:16:42.418 "trsvcid": "4420" 00:16:42.418 }, 00:16:42.418 "peer_address": { 00:16:42.418 "trtype": "TCP", 00:16:42.418 "adrfam": "IPv4", 00:16:42.418 "traddr": "10.0.0.1", 00:16:42.418 "trsvcid": "38356" 00:16:42.418 }, 00:16:42.418 "auth": { 00:16:42.418 "state": "completed", 00:16:42.418 "digest": "sha512", 00:16:42.418 "dhgroup": "ffdhe8192" 00:16:42.418 } 00:16:42.418 } 00:16:42.418 ]' 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.418 
11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.418 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.676 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.676 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.676 11:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.934 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:42.934 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:43.866 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.866 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.866 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.866 11:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.866 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.866 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:43.866 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:43.866 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:43.867 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.867 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.867 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.867 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:43.867 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.125 11:35:24 
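
Before the next leg the host is reopened to every digest and dhgroup at once; the comma-joined lists built with IFS above are passed verbatim to a single call (full rpc.py path abbreviated as before):

  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
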
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.125 11:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.057 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.057 { 00:16:45.057 "cntlid": 145, 00:16:45.057 "qid": 0, 00:16:45.057 "state": "enabled", 00:16:45.057 "thread": "nvmf_tgt_poll_group_000", 00:16:45.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:45.057 "listen_address": { 00:16:45.057 "trtype": "TCP", 00:16:45.057 "adrfam": "IPv4", 00:16:45.057 "traddr": "10.0.0.2", 00:16:45.057 "trsvcid": "4420" 00:16:45.057 }, 00:16:45.057 "peer_address": { 00:16:45.057 
"trtype": "TCP", 00:16:45.057 "adrfam": "IPv4", 00:16:45.057 "traddr": "10.0.0.1", 00:16:45.057 "trsvcid": "38394" 00:16:45.057 }, 00:16:45.057 "auth": { 00:16:45.057 "state": "completed", 00:16:45.057 "digest": "sha512", 00:16:45.057 "dhgroup": "ffdhe8192" 00:16:45.057 } 00:16:45.057 } 00:16:45.057 ]' 00:16:45.057 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.315 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.573 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:45.573 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:YTBhN2RkOWI4YmE5ODhiYzBiZThkNTBkN2ZkYzA1YzA5ZDQxYTJhMTQ2MmExZjIz+G2ndw==: --dhchap-ctrl-secret DHHC-1:03:MGQ1NGY1M2IzMjlmNGU1NTNkYzE0MmMyODllNmUwYWQ2MGFiNmI3NzE0MjhlN2ZiMzg1NGJlYzc5MGU4NWY3N7YrDoM=: 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:46.508 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:47.442 request: 00:16:47.442 { 00:16:47.442 "name": "nvme0", 00:16:47.442 "trtype": "tcp", 00:16:47.442 "traddr": "10.0.0.2", 00:16:47.442 "adrfam": "ipv4", 00:16:47.442 "trsvcid": "4420", 00:16:47.442 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:47.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:47.442 "prchk_reftag": false, 00:16:47.442 "prchk_guard": false, 00:16:47.442 "hdgst": false, 00:16:47.442 "ddgst": false, 00:16:47.442 "dhchap_key": "key2", 00:16:47.442 "allow_unrecognized_csi": false, 00:16:47.442 "method": "bdev_nvme_attach_controller", 00:16:47.442 "req_id": 1 00:16:47.442 } 00:16:47.442 Got JSON-RPC error response 00:16:47.442 response: 00:16:47.442 { 00:16:47.442 "code": -5, 00:16:47.442 "message": "Input/output error" 00:16:47.442 } 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.442 11:35:27 
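
The request/response pair above is the expected outcome of the first negative case: only key1 is registered for the host at this point, so an attach offering key2 must fail, and the harness asserts on the JSON-RPC error rather than on a successful attach. Outside the framework the same check looks roughly like this (controller name and NQNs as in the trace):

  if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
      echo "unexpected: attach with an unregistered key succeeded" >&2
      exit 1
  fi
  # the RPC is expected to fail with code -5 (Input/output error), as captured above
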
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:47.442 11:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:48.377 request: 00:16:48.377 { 00:16:48.377 "name": "nvme0", 00:16:48.377 "trtype": "tcp", 00:16:48.377 "traddr": "10.0.0.2", 00:16:48.377 "adrfam": "ipv4", 00:16:48.377 "trsvcid": "4420", 00:16:48.377 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.377 "prchk_reftag": false, 00:16:48.377 "prchk_guard": false, 00:16:48.377 "hdgst": false, 00:16:48.377 "ddgst": false, 00:16:48.377 "dhchap_key": "key1", 00:16:48.377 "dhchap_ctrlr_key": "ckey2", 00:16:48.377 "allow_unrecognized_csi": false, 00:16:48.377 "method": "bdev_nvme_attach_controller", 00:16:48.377 "req_id": 1 00:16:48.377 } 00:16:48.377 Got JSON-RPC error response 00:16:48.377 response: 00:16:48.377 { 00:16:48.377 "code": -5, 00:16:48.377 "message": "Input/output error" 00:16:48.377 } 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:48.377 11:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.377 11:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.941 request: 00:16:48.941 { 00:16:48.941 "name": "nvme0", 00:16:48.941 "trtype": "tcp", 00:16:48.941 "traddr": "10.0.0.2", 00:16:48.941 "adrfam": "ipv4", 00:16:48.941 "trsvcid": "4420", 00:16:48.941 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.941 "prchk_reftag": false, 00:16:48.941 "prchk_guard": false, 00:16:48.941 "hdgst": false, 00:16:48.941 "ddgst": false, 00:16:48.941 "dhchap_key": "key1", 00:16:48.941 "dhchap_ctrlr_key": "ckey1", 00:16:48.941 "allow_unrecognized_csi": false, 00:16:48.941 "method": "bdev_nvme_attach_controller", 00:16:48.941 "req_id": 1 00:16:48.941 } 00:16:48.941 Got JSON-RPC error response 00:16:48.941 response: 00:16:48.941 { 00:16:48.941 "code": -5, 00:16:48.941 "message": "Input/output error" 00:16:48.941 } 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2913097 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2913097 ']' 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2913097 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2913097 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2913097' 00:16:48.941 killing process with pid 2913097 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2913097 00:16:48.941 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2913097 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2935787 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2935787 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2935787 ']' 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.199 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2935787 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2935787 ']' 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
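The entries above restart the NVMe-oF target for the remaining DH-CHAP cases: the previous app (pid 2913097) is killed and nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc and the nvmf_auth log flag, then blocks until the RPC socket answers. A condensed sketch of that sequence, using only the paths and flags visible in the trace (nvmfpid and waitforlisten are the test suite's own shell helpers, shown here under the assumption that they behave as the "Waiting for process to start up..." message suggests):

    # start the target with auth logging enabled and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the target responds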
00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.456 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 null0 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7WX 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.3a3 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nOT 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.XpO ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XpO 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.022 11:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.7rw 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.o35 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.o35 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iii 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
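The block above loads the generated DH-CHAP secrets into the target's keyring and then re-runs connect_authenticate with sha512, ffdhe8192 and key3: each secret file is registered with keyring_file_add_key (key0..key3 plus the controller keys ckey0..ckey2), the host NQN is allowed on cnode0 with --dhchap-key key3, and the host side attaches a controller through /var/tmp/host.sock using the same key. A minimal sketch condensed from the commands in the trace; rpc_cmd and hostrpc are assumed to be the suite's wrappers around scripts/rpc.py for the target and host RPC sockets respectively:

    # target side: register the key3 secret and allow the host NQN with it
    rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.iii
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-key key3
    # host side: attach a controller that authenticates with the same key
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3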
00:16:50.022 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.393 nvme0n1 00:16:51.393 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.393 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.393 11:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.959 { 00:16:51.959 "cntlid": 1, 00:16:51.959 "qid": 0, 00:16:51.959 "state": "enabled", 00:16:51.959 "thread": "nvmf_tgt_poll_group_000", 00:16:51.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:51.959 "listen_address": { 00:16:51.959 "trtype": "TCP", 00:16:51.959 "adrfam": "IPv4", 00:16:51.959 "traddr": "10.0.0.2", 00:16:51.959 "trsvcid": "4420" 00:16:51.959 }, 00:16:51.959 "peer_address": { 00:16:51.959 "trtype": "TCP", 00:16:51.959 "adrfam": "IPv4", 00:16:51.959 "traddr": "10.0.0.1", 00:16:51.959 "trsvcid": "35206" 00:16:51.959 }, 00:16:51.959 "auth": { 00:16:51.959 "state": "completed", 00:16:51.959 "digest": "sha512", 00:16:51.959 "dhgroup": "ffdhe8192" 00:16:51.959 } 00:16:51.959 } 00:16:51.959 ]' 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.959 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.217 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:52.217 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:53.149 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:53.150 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.408 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.666 request: 00:16:53.666 { 00:16:53.666 "name": "nvme0", 00:16:53.666 "trtype": "tcp", 00:16:53.666 "traddr": "10.0.0.2", 00:16:53.666 "adrfam": "ipv4", 00:16:53.666 "trsvcid": "4420", 00:16:53.666 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.666 "prchk_reftag": false, 00:16:53.666 "prchk_guard": false, 00:16:53.666 "hdgst": false, 00:16:53.666 "ddgst": false, 00:16:53.666 "dhchap_key": "key3", 00:16:53.666 "allow_unrecognized_csi": false, 00:16:53.666 "method": "bdev_nvme_attach_controller", 00:16:53.666 "req_id": 1 00:16:53.666 } 00:16:53.666 Got JSON-RPC error response 00:16:53.666 response: 00:16:53.666 { 00:16:53.666 "code": -5, 00:16:53.666 "message": "Input/output error" 00:16:53.666 } 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.666 11:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.925 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.182 request: 00:16:54.182 { 00:16:54.182 "name": "nvme0", 00:16:54.182 "trtype": "tcp", 00:16:54.182 "traddr": "10.0.0.2", 00:16:54.182 "adrfam": "ipv4", 00:16:54.182 "trsvcid": "4420", 00:16:54.182 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:54.182 "prchk_reftag": false, 00:16:54.182 "prchk_guard": false, 00:16:54.182 "hdgst": false, 00:16:54.182 "ddgst": false, 00:16:54.182 "dhchap_key": "key3", 00:16:54.182 "allow_unrecognized_csi": false, 00:16:54.182 "method": "bdev_nvme_attach_controller", 00:16:54.182 "req_id": 1 00:16:54.182 } 00:16:54.182 Got JSON-RPC error response 00:16:54.182 response: 00:16:54.182 { 00:16:54.182 "code": -5, 00:16:54.182 "message": "Input/output error" 00:16:54.182 } 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.182 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.748 11:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:55.314 request: 00:16:55.314 { 00:16:55.314 "name": "nvme0", 00:16:55.314 "trtype": "tcp", 00:16:55.314 "traddr": "10.0.0.2", 00:16:55.314 "adrfam": "ipv4", 00:16:55.314 "trsvcid": "4420", 00:16:55.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:55.314 "prchk_reftag": false, 00:16:55.314 "prchk_guard": false, 00:16:55.314 "hdgst": false, 00:16:55.314 "ddgst": false, 00:16:55.314 "dhchap_key": "key0", 00:16:55.314 "dhchap_ctrlr_key": "key1", 00:16:55.314 "allow_unrecognized_csi": false, 00:16:55.314 "method": "bdev_nvme_attach_controller", 00:16:55.314 "req_id": 1 00:16:55.314 } 00:16:55.314 Got JSON-RPC error response 00:16:55.314 response: 00:16:55.314 { 00:16:55.314 "code": -5, 00:16:55.314 "message": "Input/output error" 00:16:55.314 } 00:16:55.314 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:55.314 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.314 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.314 11:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.314 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:55.314 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:55.314 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:55.572 nvme0n1 00:16:55.572 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:55.572 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.572 11:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:55.831 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.831 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.831 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:56.089 11:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:57.463 nvme0n1 00:16:57.463 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:57.463 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:57.463 11:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:57.722 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.981 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.981 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:57.981 11:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: --dhchap-ctrl-secret DHHC-1:03:NzRlN2FlMTkzMDg5NDk1YTMxM2RiYTdmNjcwMjdhYjJlMmMxODlmNDkxNGQ2OWI0ODkwYzdkODczYTBkMTc3NdxjnIM=: 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.915 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:59.177 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:00.112 request: 00:17:00.112 { 00:17:00.112 "name": "nvme0", 00:17:00.112 "trtype": "tcp", 00:17:00.112 "traddr": "10.0.0.2", 00:17:00.112 "adrfam": "ipv4", 00:17:00.112 "trsvcid": "4420", 00:17:00.112 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:00.112 "prchk_reftag": false, 00:17:00.112 "prchk_guard": false, 00:17:00.112 "hdgst": false, 00:17:00.112 "ddgst": false, 00:17:00.112 "dhchap_key": "key1", 00:17:00.112 "allow_unrecognized_csi": false, 00:17:00.112 "method": "bdev_nvme_attach_controller", 00:17:00.112 "req_id": 1 00:17:00.112 } 00:17:00.112 Got JSON-RPC error response 00:17:00.112 response: 00:17:00.112 { 00:17:00.112 "code": -5, 00:17:00.112 "message": "Input/output error" 00:17:00.112 } 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:00.112 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:01.486 nvme0n1 00:17:01.486 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:01.486 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:01.486 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.744 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.744 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.744 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:02.002 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:02.260 nvme0n1 00:17:02.260 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:02.260 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:02.260 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.518 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.518 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.518 11:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: '' 2s 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:02.775 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: ]] 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTRjYmM1ZWM0NjBiNTU2NGVlMTE2ZGNmZWFjN2Y3MTbxlN5Q: 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:02.776 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: 2s 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:05.302 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: ]] 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjlhMmYzMjMzM2Y2NDg5MDQyZjQzOTNhODdjNGUxZmFjZDNhYjY1MzVlMTk0MjUxuw0TwQ==: 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:05.303 11:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:06.808 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:06.808 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:06.808 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:06.808 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:06.808 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:06.808 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.066 11:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:08.439 nvme0n1 00:17:08.439 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.439 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.439 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.439 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.439 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.439 11:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:09.372 11:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:09.631 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:09.631 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:09.631 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:10.196 11:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:10.761 request: 00:17:10.761 { 00:17:10.761 "name": "nvme0", 00:17:10.761 "dhchap_key": "key1", 00:17:10.761 "dhchap_ctrlr_key": "key3", 00:17:10.761 "method": "bdev_nvme_set_keys", 00:17:10.761 "req_id": 1 00:17:10.761 } 00:17:10.761 Got JSON-RPC error response 00:17:10.761 response: 00:17:10.761 { 00:17:10.761 "code": -13, 00:17:10.761 "message": "Permission denied" 00:17:10.761 } 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:10.761 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.019 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:11.019 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.392 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:13.766 nvme0n1 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:13.766 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:14.699 request: 00:17:14.699 { 00:17:14.699 "name": "nvme0", 00:17:14.699 "dhchap_key": "key2", 00:17:14.699 "dhchap_ctrlr_key": "key0", 00:17:14.699 "method": "bdev_nvme_set_keys", 00:17:14.699 "req_id": 1 00:17:14.699 } 00:17:14.699 Got JSON-RPC error response 00:17:14.699 response: 00:17:14.699 { 00:17:14.699 "code": -13, 00:17:14.699 "message": "Permission denied" 00:17:14.699 } 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:14.699 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.957 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:14.957 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:15.891 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:15.891 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:15.891 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.148 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:16.148 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:16.148 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:16.148 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2913122 00:17:16.148 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2913122 ']' 00:17:16.148 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2913122 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:16.149 
11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2913122 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2913122' 00:17:16.149 killing process with pid 2913122 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2913122 00:17:16.149 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2913122 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.713 rmmod nvme_tcp 00:17:16.713 rmmod nvme_fabrics 00:17:16.713 rmmod nvme_keyring 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2935787 ']' 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2935787 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2935787 ']' 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2935787 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.713 11:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2935787 00:17:16.713 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.713 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.713 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2935787' 00:17:16.713 killing process with pid 2935787 00:17:16.713 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2935787 00:17:16.713 11:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2935787 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.971 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.880 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:18.880 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.7WX /tmp/spdk.key-sha256.nOT /tmp/spdk.key-sha384.7rw /tmp/spdk.key-sha512.iii /tmp/spdk.key-sha512.3a3 /tmp/spdk.key-sha384.XpO /tmp/spdk.key-sha256.o35 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:18.880 00:17:18.880 real 3m31.444s 00:17:18.880 user 8m16.921s 00:17:18.880 sys 0m27.913s 00:17:18.880 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.880 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.880 ************************************ 00:17:18.880 END TEST nvmf_auth_target 00:17:18.880 ************************************ 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.140 ************************************ 00:17:19.140 START TEST nvmf_bdevio_no_huge 00:17:19.140 ************************************ 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:19.140 * Looking for test storage... 
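The nvmf_auth_target run that ends above exercises DHCHAP re-keying from both ends of the connection: nvmf_subsystem_set_keys tells the target which keys to accept for this host, bdev_nvme_set_keys re-authenticates the already-attached host controller, and a pair the target was not told about (key1/key3, then key2/key0) is rejected with JSON-RPC error -13 "Permission denied". A minimal sketch of that sequence, assuming the DHHC-1 secrets were registered as named keys key0..key3 earlier in the test, and that the target and host RPC servers listen on /var/tmp/spdk.sock and /var/tmp/host.sock as in the trace (workspace paths shortened):

  # Target side (default RPC socket): keys the subsystem will accept for this host.
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key0 --dhchap-ctrlr-key key1

  # Host side: attach a controller that authenticates with the matching pair.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key key1 \
      --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

  # Rotate: update the target first, then re-key the live controller with the same pair.
  scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # A pair the target does not expect is refused, as seen twice in the trace above.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3 || echo "re-key rejected (-13) as expected"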
00:17:19.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.140 --rc genhtml_branch_coverage=1 00:17:19.140 --rc genhtml_function_coverage=1 00:17:19.140 --rc genhtml_legend=1 00:17:19.140 --rc geninfo_all_blocks=1 00:17:19.140 --rc geninfo_unexecuted_blocks=1 00:17:19.140 00:17:19.140 ' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.140 --rc genhtml_branch_coverage=1 00:17:19.140 --rc genhtml_function_coverage=1 00:17:19.140 --rc genhtml_legend=1 00:17:19.140 --rc geninfo_all_blocks=1 00:17:19.140 --rc geninfo_unexecuted_blocks=1 00:17:19.140 00:17:19.140 ' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.140 --rc genhtml_branch_coverage=1 00:17:19.140 --rc genhtml_function_coverage=1 00:17:19.140 --rc genhtml_legend=1 00:17:19.140 --rc geninfo_all_blocks=1 00:17:19.140 --rc geninfo_unexecuted_blocks=1 00:17:19.140 00:17:19.140 ' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.140 --rc genhtml_branch_coverage=1 00:17:19.140 --rc genhtml_function_coverage=1 00:17:19.140 --rc genhtml_legend=1 00:17:19.140 --rc geninfo_all_blocks=1 00:17:19.140 --rc geninfo_unexecuted_blocks=1 00:17:19.140 00:17:19.140 ' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.140 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:19.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.141 11:35:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.108 
11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:21.108 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:21.108 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:21.108 Found net devices under 0000:09:00.0: cvl_0_0 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:21.108 Found net devices under 0000:09:00.1: cvl_0_1 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:21.108 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:21.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:17:21.367 00:17:21.367 --- 10.0.0.2 ping statistics --- 00:17:21.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.367 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:21.367 00:17:21.367 --- 10.0.0.1 ping statistics --- 00:17:21.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.367 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2941156 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2941156 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2941156 ']' 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.367 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.367 [2024-11-15 11:36:01.648689] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:17:21.367 [2024-11-15 11:36:01.648778] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:21.367 [2024-11-15 11:36:01.728947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:21.367 [2024-11-15 11:36:01.790061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.367 [2024-11-15 11:36:01.790109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.367 [2024-11-15 11:36:01.790124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.367 [2024-11-15 11:36:01.790136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.367 [2024-11-15 11:36:01.790147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
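Before the bdevio run, nvmf_tcp_init above splits the two e810 ports: cvl_0_0 moves into a private network namespace and becomes the target-facing interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1; TCP port 4420 is opened in iptables and both directions are ping-checked before nvmf_tgt is launched inside the namespace with ordinary (non-huge) pages. Condensed to the commands that matter, using the interface names and addresses from the trace (full workspace paths shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP in and verify the path both ways before starting the target.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # Run the target inside that namespace with 1024 MB of regular memory instead of hugepages.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &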
00:17:21.367 [2024-11-15 11:36:01.791221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:21.367 [2024-11-15 11:36:01.791300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:21.367 [2024-11-15 11:36:01.791365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:21.367 [2024-11-15 11:36:01.791397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.624 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.624 [2024-11-15 11:36:01.946880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.625 Malloc0 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.625 [2024-11-15 11:36:01.985165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:21.625 { 00:17:21.625 "params": { 00:17:21.625 "name": "Nvme$subsystem", 00:17:21.625 "trtype": "$TEST_TRANSPORT", 00:17:21.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.625 "adrfam": "ipv4", 00:17:21.625 "trsvcid": "$NVMF_PORT", 00:17:21.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.625 "hdgst": ${hdgst:-false}, 00:17:21.625 "ddgst": ${ddgst:-false} 00:17:21.625 }, 00:17:21.625 "method": "bdev_nvme_attach_controller" 00:17:21.625 } 00:17:21.625 EOF 00:17:21.625 )") 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:21.625 11:36:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:21.625 "params": { 00:17:21.625 "name": "Nvme1", 00:17:21.625 "trtype": "tcp", 00:17:21.625 "traddr": "10.0.0.2", 00:17:21.625 "adrfam": "ipv4", 00:17:21.625 "trsvcid": "4420", 00:17:21.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.625 "hdgst": false, 00:17:21.625 "ddgst": false 00:17:21.625 }, 00:17:21.625 "method": "bdev_nvme_attach_controller" 00:17:21.625 }' 00:17:21.625 [2024-11-15 11:36:02.034204] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
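The bdevio target above is simple: a TCP transport (options -o -u 8192 as traced), a 64 MiB malloc bdev with 512-byte blocks exported as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420. The initiator this time is not the kernel host but the bdevio app itself, which is handed the bdev_nvme_attach_controller parameters printed above as a JSON config on /dev/fd/62. A rough equivalent, assuming the standard SPDK JSON-config wrapper (the trace only prints the inner config entry) and using a hypothetical $json variable for readability:

  # Target-side plumbing against the default RPC socket of the nvmf_tgt started above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevio attaches through a JSON config rather than RPC, again without hugepages.
  json='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } } ] } ] }'
  test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(echo "$json")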
00:17:21.625 [2024-11-15 11:36:02.034275] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2941206 ] 00:17:21.882 [2024-11-15 11:36:02.108004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.882 [2024-11-15 11:36:02.174170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.882 [2024-11-15 11:36:02.174219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.882 [2024-11-15 11:36:02.174222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.138 I/O targets: 00:17:22.138 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:22.138 00:17:22.138 00:17:22.138 CUnit - A unit testing framework for C - Version 2.1-3 00:17:22.138 http://cunit.sourceforge.net/ 00:17:22.138 00:17:22.138 00:17:22.138 Suite: bdevio tests on: Nvme1n1 00:17:22.395 Test: blockdev write read block ...passed 00:17:22.395 Test: blockdev write zeroes read block ...passed 00:17:22.395 Test: blockdev write zeroes read no split ...passed 00:17:22.395 Test: blockdev write zeroes read split ...passed 00:17:22.395 Test: blockdev write zeroes read split partial ...passed 00:17:22.395 Test: blockdev reset ...[2024-11-15 11:36:02.652587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:22.395 [2024-11-15 11:36:02.652702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15b66e0 (9): Bad file descriptor 00:17:22.395 [2024-11-15 11:36:02.669535] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:22.395 passed 00:17:22.395 Test: blockdev write read 8 blocks ...passed 00:17:22.395 Test: blockdev write read size > 128k ...passed 00:17:22.395 Test: blockdev write read invalid size ...passed 00:17:22.395 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:22.395 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:22.395 Test: blockdev write read max offset ...passed 00:17:22.395 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:22.652 Test: blockdev writev readv 8 blocks ...passed 00:17:22.652 Test: blockdev writev readv 30 x 1block ...passed 00:17:22.652 Test: blockdev writev readv block ...passed 00:17:22.652 Test: blockdev writev readv size > 128k ...passed 00:17:22.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:22.652 Test: blockdev comparev and writev ...[2024-11-15 11:36:02.923563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.652 [2024-11-15 11:36:02.923599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:22.652 [2024-11-15 11:36:02.923624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.652 [2024-11-15 11:36:02.923642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:22.652 [2024-11-15 11:36:02.923992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.652 [2024-11-15 11:36:02.924018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:22.652 [2024-11-15 11:36:02.924039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.653 [2024-11-15 11:36:02.924056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:02.924395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.653 [2024-11-15 11:36:02.924421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:02.924443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.653 [2024-11-15 11:36:02.924458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:02.924807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.653 [2024-11-15 11:36:02.924831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:02.924852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:22.653 [2024-11-15 11:36:02.924868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:22.653 passed 00:17:22.653 Test: blockdev nvme passthru rw ...passed 00:17:22.653 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:36:03.007538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.653 [2024-11-15 11:36:03.007567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:03.007712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.653 [2024-11-15 11:36:03.007736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:03.007875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.653 [2024-11-15 11:36:03.007898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:22.653 [2024-11-15 11:36:03.008032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:22.653 [2024-11-15 11:36:03.008055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:22.653 passed 00:17:22.653 Test: blockdev nvme admin passthru ...passed 00:17:22.653 Test: blockdev copy ...passed 00:17:22.653 00:17:22.653 Run Summary: Type Total Ran Passed Failed Inactive 00:17:22.653 suites 1 1 n/a 0 0 00:17:22.653 tests 23 23 23 0 0 00:17:22.653 asserts 152 152 152 0 n/a 00:17:22.653 00:17:22.653 Elapsed time = 1.064 seconds 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.218 rmmod nvme_tcp 00:17:23.218 rmmod nvme_fabrics 00:17:23.218 rmmod nvme_keyring 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2941156 ']' 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2941156 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2941156 ']' 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2941156 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2941156 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2941156' 00:17:23.218 killing process with pid 2941156 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2941156 00:17:23.218 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2941156 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.783 11:36:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.684 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:25.684 00:17:25.684 real 0m6.634s 00:17:25.684 user 0m11.534s 00:17:25.684 sys 0m2.517s 00:17:25.684 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.684 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.684 ************************************ 00:17:25.684 END TEST nvmf_bdevio_no_huge 00:17:25.684 ************************************ 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:25.684 ************************************ 00:17:25.684 START TEST nvmf_tls 00:17:25.684 ************************************ 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:25.684 * Looking for test storage... 00:17:25.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:25.684 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:25.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.943 --rc genhtml_branch_coverage=1 00:17:25.943 --rc genhtml_function_coverage=1 00:17:25.943 --rc genhtml_legend=1 00:17:25.943 --rc geninfo_all_blocks=1 00:17:25.943 --rc geninfo_unexecuted_blocks=1 00:17:25.943 00:17:25.943 ' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:25.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.943 --rc genhtml_branch_coverage=1 00:17:25.943 --rc genhtml_function_coverage=1 00:17:25.943 --rc genhtml_legend=1 00:17:25.943 --rc geninfo_all_blocks=1 00:17:25.943 --rc geninfo_unexecuted_blocks=1 00:17:25.943 00:17:25.943 ' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:25.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.943 --rc genhtml_branch_coverage=1 00:17:25.943 --rc genhtml_function_coverage=1 00:17:25.943 --rc genhtml_legend=1 00:17:25.943 --rc geninfo_all_blocks=1 00:17:25.943 --rc geninfo_unexecuted_blocks=1 00:17:25.943 00:17:25.943 ' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:25.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.943 --rc genhtml_branch_coverage=1 00:17:25.943 --rc genhtml_function_coverage=1 00:17:25.943 --rc genhtml_legend=1 00:17:25.943 --rc geninfo_all_blocks=1 00:17:25.943 --rc geninfo_unexecuted_blocks=1 00:17:25.943 00:17:25.943 ' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:25.943 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.944 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:28.475 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:28.476 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:28.476 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:28.476 Found net devices under 0000:09:00.0: cvl_0_0 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:28.476 Found net devices under 0000:09:00.1: cvl_0_1 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:28.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:17:28.476 00:17:28.476 --- 10.0.0.2 ping statistics --- 00:17:28.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.476 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:17:28.476 00:17:28.476 --- 10.0.0.1 ping statistics --- 00:17:28.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.476 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2943788 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2943788 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:28.476 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2943788 ']' 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.477 [2024-11-15 11:36:08.533201] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
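nvmftestinit above splits the two e810 ports into a point-to-point test topology: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, with 10.0.0.2 and 10.0.0.1 on either end and an iptables rule opening port 4420, verified by the two pings. A condensed sketch of the traced commands (interface names and addresses are the ones this run discovered; the full helper also flushes addresses and handles cleanup):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and back
# the target for the TLS test is then started inside the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc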
00:17:28.477 [2024-11-15 11:36:08.533290] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.477 [2024-11-15 11:36:08.608753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.477 [2024-11-15 11:36:08.668855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.477 [2024-11-15 11:36:08.668924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.477 [2024-11-15 11:36:08.668960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.477 [2024-11-15 11:36:08.668972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.477 [2024-11-15 11:36:08.668982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.477 [2024-11-15 11:36:08.669619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:28.477 11:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:28.735 true 00:17:28.735 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:28.735 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:28.993 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:28.993 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:28.993 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:29.251 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:29.251 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.509 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:29.509 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:29.509 11:36:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:29.767 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:29.767 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:30.331 11:36:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:30.588 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.588 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:31.154 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:31.154 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:31.154 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:31.154 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.154 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:31.412 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.kQWJGLj1az 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.kkV2MsUGGx 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kQWJGLj1az 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.kkV2MsUGGx 00:17:31.671 11:36:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:31.931 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:32.222 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.kQWJGLj1az 00:17:32.222 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kQWJGLj1az 00:17:32.222 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:32.514 [2024-11-15 11:36:12.829046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.514 11:36:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:32.772 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:33.029 [2024-11-15 11:36:13.350435] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.029 [2024-11-15 11:36:13.350663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.029 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:33.287 malloc0 00:17:33.287 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.544 11:36:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kQWJGLj1az 00:17:33.802 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.060 11:36:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kQWJGLj1az 00:17:46.252 Initializing NVMe Controllers 00:17:46.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.252 Initialization complete. Launching workers. 00:17:46.252 ======================================================== 00:17:46.252 Latency(us) 00:17:46.252 Device Information : IOPS MiB/s Average min max 00:17:46.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8565.44 33.46 7473.93 1042.94 9355.67 00:17:46.252 ======================================================== 00:17:46.252 Total : 8565.44 33.46 7473.93 1042.94 9355.67 00:17:46.252 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kQWJGLj1az 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kQWJGLj1az 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2945813 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2945813 /var/tmp/bdevperf.sock 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2945813 ']' 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.252 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:46.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.253 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.253 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.253 [2024-11-15 11:36:24.601891] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:17:46.253 [2024-11-15 11:36:24.601973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945813 ] 00:17:46.253 [2024-11-15 11:36:24.667116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.253 [2024-11-15 11:36:24.723872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.253 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.253 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:46.253 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kQWJGLj1az 00:17:46.253 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:46.253 [2024-11-15 11:36:25.414799] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.253 TLSTESTn1 00:17:46.253 11:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.253 Running I/O for 10 seconds... 
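setup_nvmf_tgt in tls.sh layers TLS on top of that target: the PSK interchange strings produced above by format_interchange_psk (the NVMeTLSkey-1:01:...: values) are written to mktemp files, registered through the keyring, and bound to the host NQN, and the listener is created with -k, which enables TLS on that listen address (hence the "TLS support is considered experimental" notices). A sketch of the traced sequence, with KEY standing in for the /tmp/tmp.kQWJGLj1az path used in this run and rpc.py shown for illustration:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.kQWJGLj1az                               # contains NVMeTLSkey-1:01:MDAx...JEiQ:
chmod 0600 "$KEY"                                     # tls.sh restricts the key file before registering it
$rpc sock_impl_set_options -i ssl --tls-version 13    # target was started with --wait-for-rpc
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$KEY"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# spdk_nvme_perf then connects with -S ssl and --psk-path "$KEY", producing the latency
# table above; run_bdevperf repeats the exercise through the bdevperf application.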
00:17:47.623 3308.00 IOPS, 12.92 MiB/s [2024-11-15T10:36:28.981Z] 3335.00 IOPS, 13.03 MiB/s [2024-11-15T10:36:29.913Z] 3353.00 IOPS, 13.10 MiB/s [2024-11-15T10:36:30.844Z] 3364.50 IOPS, 13.14 MiB/s [2024-11-15T10:36:31.776Z] 3364.80 IOPS, 13.14 MiB/s [2024-11-15T10:36:32.708Z] 3376.83 IOPS, 13.19 MiB/s [2024-11-15T10:36:34.078Z] 3387.43 IOPS, 13.23 MiB/s [2024-11-15T10:36:34.692Z] 3385.62 IOPS, 13.23 MiB/s [2024-11-15T10:36:36.064Z] 3396.44 IOPS, 13.27 MiB/s [2024-11-15T10:36:36.064Z] 3390.70 IOPS, 13.24 MiB/s 00:17:55.637 Latency(us) 00:17:55.637 [2024-11-15T10:36:36.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.637 Verification LBA range: start 0x0 length 0x2000 00:17:55.637 TLSTESTn1 : 10.02 3397.69 13.27 0.00 0.00 37613.93 5509.88 46215.02 00:17:55.637 [2024-11-15T10:36:36.064Z] =================================================================================================================== 00:17:55.637 [2024-11-15T10:36:36.064Z] Total : 3397.69 13.27 0.00 0.00 37613.93 5509.88 46215.02 00:17:55.637 { 00:17:55.637 "results": [ 00:17:55.637 { 00:17:55.637 "job": "TLSTESTn1", 00:17:55.637 "core_mask": "0x4", 00:17:55.637 "workload": "verify", 00:17:55.637 "status": "finished", 00:17:55.637 "verify_range": { 00:17:55.637 "start": 0, 00:17:55.637 "length": 8192 00:17:55.637 }, 00:17:55.637 "queue_depth": 128, 00:17:55.637 "io_size": 4096, 00:17:55.637 "runtime": 10.016209, 00:17:55.637 "iops": 3397.692679935093, 00:17:55.637 "mibps": 13.272237030996457, 00:17:55.637 "io_failed": 0, 00:17:55.637 "io_timeout": 0, 00:17:55.637 "avg_latency_us": 37613.92839436521, 00:17:55.637 "min_latency_us": 5509.878518518519, 00:17:55.637 "max_latency_us": 46215.01629629629 00:17:55.637 } 00:17:55.637 ], 00:17:55.637 "core_count": 1 00:17:55.637 } 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2945813 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2945813 ']' 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2945813 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2945813 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2945813' 00:17:55.637 killing process with pid 2945813 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2945813 00:17:55.637 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.637 00:17:55.637 Latency(us) 00:17:55.637 [2024-11-15T10:36:36.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.637 [2024-11-15T10:36:36.064Z] 
=================================================================================================================== 00:17:55.637 [2024-11-15T10:36:36.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2945813 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kkV2MsUGGx 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kkV2MsUGGx 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kkV2MsUGGx 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kkV2MsUGGx 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2947134 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2947134 /var/tmp/bdevperf.sock 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2947134 ']' 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.637 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.637 [2024-11-15 11:36:36.007546] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:17:55.637 [2024-11-15 11:36:36.007642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947134 ] 00:17:55.895 [2024-11-15 11:36:36.076532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.895 [2024-11-15 11:36:36.131892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.895 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.895 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:55.895 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kkV2MsUGGx 00:17:56.153 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:56.410 [2024-11-15 11:36:36.766367] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.410 [2024-11-15 11:36:36.776061] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:56.410 [2024-11-15 11:36:36.776439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6022c0 (107): Transport endpoint is not connected 00:17:56.410 [2024-11-15 11:36:36.777416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6022c0 (9): Bad file descriptor 00:17:56.410 [2024-11-15 11:36:36.778415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:56.410 [2024-11-15 11:36:36.778435] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:56.410 [2024-11-15 11:36:36.778448] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:56.410 [2024-11-15 11:36:36.778467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
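Because the key registered in this pass (/tmp/tmp.kkV2MsUGGx) does not match the PSK the target was provisioned with, the TLS handshake is torn down (errno 107) and bdev_nvme_attach_controller fails; the RPC caller sees that as the -5 Input/output error dumped next. The harness wraps such steps in NOT ... so a non-zero exit is the pass condition. A sketch of the same assertion in Python, assuming the rpc.py path and bdevperf socket from the log:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bdevperf.sock"

    # Attempt the TLS attach with the mismatched key and require it to fail,
    # mirroring the NOT run_bdevperf ... step in the log.
    proc = subprocess.run(
        [RPC, "-s", SOCK, "bdev_nvme_attach_controller", "-b", "TLSTEST",
         "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
         "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
         "--psk", "key0"],
        capture_output=True, text=True)
    assert proc.returncode != 0, "attach unexpectedly succeeded with the wrong PSK"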
00:17:56.410 request: 00:17:56.410 { 00:17:56.410 "name": "TLSTEST", 00:17:56.410 "trtype": "tcp", 00:17:56.410 "traddr": "10.0.0.2", 00:17:56.410 "adrfam": "ipv4", 00:17:56.410 "trsvcid": "4420", 00:17:56.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.410 "prchk_reftag": false, 00:17:56.410 "prchk_guard": false, 00:17:56.410 "hdgst": false, 00:17:56.410 "ddgst": false, 00:17:56.410 "psk": "key0", 00:17:56.410 "allow_unrecognized_csi": false, 00:17:56.410 "method": "bdev_nvme_attach_controller", 00:17:56.410 "req_id": 1 00:17:56.410 } 00:17:56.410 Got JSON-RPC error response 00:17:56.410 response: 00:17:56.410 { 00:17:56.410 "code": -5, 00:17:56.410 "message": "Input/output error" 00:17:56.410 } 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2947134 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2947134 ']' 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2947134 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947134 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947134' 00:17:56.410 killing process with pid 2947134 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2947134 00:17:56.410 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.410 00:17:56.410 Latency(us) 00:17:56.410 [2024-11-15T10:36:36.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.410 [2024-11-15T10:36:36.837Z] =================================================================================================================== 00:17:56.410 [2024-11-15T10:36:36.837Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.410 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2947134 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kQWJGLj1az 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.kQWJGLj1az 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kQWJGLj1az 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kQWJGLj1az 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2947277 00:17:56.667 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2947277 /var/tmp/bdevperf.sock 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2947277 ']' 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.668 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.668 [2024-11-15 11:36:37.075526] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:17:56.668 [2024-11-15 11:36:37.075625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947277 ] 00:17:56.924 [2024-11-15 11:36:37.141510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.924 [2024-11-15 11:36:37.198198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.924 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.924 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.924 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kQWJGLj1az 00:17:57.181 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:57.439 [2024-11-15 11:36:37.840255] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.439 [2024-11-15 11:36:37.848693] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.439 [2024-11-15 11:36:37.848722] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.439 [2024-11-15 11:36:37.848774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:57.439 [2024-11-15 11:36:37.849270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209b2c0 (107): Transport endpoint is not connected 00:17:57.439 [2024-11-15 11:36:37.850260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209b2c0 (9): Bad file descriptor 00:17:57.439 [2024-11-15 11:36:37.851259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:57.439 [2024-11-15 11:36:37.851277] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.439 [2024-11-15 11:36:37.851311] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:57.439 [2024-11-15 11:36:37.851332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
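The target-side errors above show the TLS PSK identity the listener tried to resolve: the string "NVMe0R01 <hostnqn> <subnqn>". Since key0 was provisioned for host1, the identity presented by host2 has no matching entry and the handshake is refused. A sketch of composing that identity, treating the "NVMe0R01" prefix exactly as it appears in the log rather than deriving it:

    def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
        # Identity layout as printed in the target's error message:
        # "NVMe0R01 <hostnqn> <subnqn>"; the prefix is taken as fixed here.
        return f"NVMe0R01 {hostnqn} {subnqn}"

    # Provisioned identity (has a PSK) vs. the one host2 presents (does not).
    print(tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1"))
    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))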
00:17:57.439 request: 00:17:57.439 { 00:17:57.439 "name": "TLSTEST", 00:17:57.439 "trtype": "tcp", 00:17:57.439 "traddr": "10.0.0.2", 00:17:57.439 "adrfam": "ipv4", 00:17:57.439 "trsvcid": "4420", 00:17:57.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.439 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:57.439 "prchk_reftag": false, 00:17:57.439 "prchk_guard": false, 00:17:57.439 "hdgst": false, 00:17:57.439 "ddgst": false, 00:17:57.439 "psk": "key0", 00:17:57.439 "allow_unrecognized_csi": false, 00:17:57.439 "method": "bdev_nvme_attach_controller", 00:17:57.439 "req_id": 1 00:17:57.439 } 00:17:57.439 Got JSON-RPC error response 00:17:57.439 response: 00:17:57.439 { 00:17:57.439 "code": -5, 00:17:57.439 "message": "Input/output error" 00:17:57.439 } 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2947277 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2947277 ']' 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2947277 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947277 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947277' 00:17:57.697 killing process with pid 2947277 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2947277 00:17:57.697 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.697 00:17:57.697 Latency(us) 00:17:57.697 [2024-11-15T10:36:38.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.697 [2024-11-15T10:36:38.124Z] =================================================================================================================== 00:17:57.697 [2024-11-15T10:36:38.124Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.697 11:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2947277 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kQWJGLj1az 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.kQWJGLj1az 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kQWJGLj1az 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kQWJGLj1az 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2947421 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2947421 /var/tmp/bdevperf.sock 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2947421 ']' 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.955 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.955 [2024-11-15 11:36:38.184957] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:17:57.955 [2024-11-15 11:36:38.185044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947421 ] 00:17:57.955 [2024-11-15 11:36:38.250041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.955 [2024-11-15 11:36:38.305351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.214 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.214 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.214 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kQWJGLj1az 00:17:58.471 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.730 [2024-11-15 11:36:38.942519] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.730 [2024-11-15 11:36:38.951564] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.730 [2024-11-15 11:36:38.951595] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:58.730 [2024-11-15 11:36:38.951647] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:58.730 [2024-11-15 11:36:38.951906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce2c0 (107): Transport endpoint is not connected 00:17:58.730 [2024-11-15 11:36:38.952896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ce2c0 (9): Bad file descriptor 00:17:58.730 [2024-11-15 11:36:38.953895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:58.730 [2024-11-15 11:36:38.953919] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:58.730 [2024-11-15 11:36:38.953947] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:58.730 [2024-11-15 11:36:38.953966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:58.730 request: 00:17:58.730 { 00:17:58.730 "name": "TLSTEST", 00:17:58.730 "trtype": "tcp", 00:17:58.730 "traddr": "10.0.0.2", 00:17:58.730 "adrfam": "ipv4", 00:17:58.730 "trsvcid": "4420", 00:17:58.730 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:58.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.730 "prchk_reftag": false, 00:17:58.730 "prchk_guard": false, 00:17:58.730 "hdgst": false, 00:17:58.730 "ddgst": false, 00:17:58.730 "psk": "key0", 00:17:58.730 "allow_unrecognized_csi": false, 00:17:58.730 "method": "bdev_nvme_attach_controller", 00:17:58.730 "req_id": 1 00:17:58.730 } 00:17:58.730 Got JSON-RPC error response 00:17:58.730 response: 00:17:58.730 { 00:17:58.730 "code": -5, 00:17:58.730 "message": "Input/output error" 00:17:58.730 } 00:17:58.730 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2947421 00:17:58.730 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2947421 ']' 00:17:58.730 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2947421 00:17:58.730 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.730 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.730 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947421 00:17:58.730 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.730 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.730 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947421' 00:17:58.730 killing process with pid 2947421 00:17:58.730 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2947421 00:17:58.730 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.730 00:17:58.730 Latency(us) 00:17:58.730 [2024-11-15T10:36:39.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.730 [2024-11-15T10:36:39.157Z] =================================================================================================================== 00:17:58.730 [2024-11-15T10:36:39.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.730 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2947421 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.988 
11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2947562 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2947562 /var/tmp/bdevperf.sock 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2947562 ']' 00:17:58.988 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.989 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.989 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.989 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.989 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.989 [2024-11-15 11:36:39.281795] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
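This pass of run_bdevperf hands an empty string to keyring_file_add_key; as the keyring error just below shows, only absolute paths are accepted, so no key is created and the attach then fails with "Required key not available" (-126). A sketch of guarding against that when scripting the RPC, assuming the same rpc.py and socket as above (the helper name is illustrative):

    import os
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bdevperf.sock"

    def add_psk_key(name: str, path: str) -> None:
        # keyring_file only accepts absolute paths; fail early instead of
        # letting the RPC return "Operation not permitted".
        if not os.path.isabs(path):
            raise ValueError(f"PSK path must be absolute, got {path!r}")
        subprocess.run([RPC, "-s", SOCK, "keyring_file_add_key", name, path],
                       check=True)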
00:17:58.989 [2024-11-15 11:36:39.281879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947562 ] 00:17:58.989 [2024-11-15 11:36:39.347559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.989 [2024-11-15 11:36:39.402132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.247 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.247 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.247 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:59.504 [2024-11-15 11:36:39.757165] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:59.504 [2024-11-15 11:36:39.757210] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:59.504 request: 00:17:59.504 { 00:17:59.504 "name": "key0", 00:17:59.504 "path": "", 00:17:59.504 "method": "keyring_file_add_key", 00:17:59.504 "req_id": 1 00:17:59.504 } 00:17:59.504 Got JSON-RPC error response 00:17:59.504 response: 00:17:59.504 { 00:17:59.504 "code": -1, 00:17:59.504 "message": "Operation not permitted" 00:17:59.504 } 00:17:59.504 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:59.762 [2024-11-15 11:36:40.038072] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.762 [2024-11-15 11:36:40.038138] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:59.762 request: 00:17:59.762 { 00:17:59.762 "name": "TLSTEST", 00:17:59.762 "trtype": "tcp", 00:17:59.762 "traddr": "10.0.0.2", 00:17:59.762 "adrfam": "ipv4", 00:17:59.762 "trsvcid": "4420", 00:17:59.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:59.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:59.762 "prchk_reftag": false, 00:17:59.762 "prchk_guard": false, 00:17:59.762 "hdgst": false, 00:17:59.762 "ddgst": false, 00:17:59.762 "psk": "key0", 00:17:59.762 "allow_unrecognized_csi": false, 00:17:59.762 "method": "bdev_nvme_attach_controller", 00:17:59.762 "req_id": 1 00:17:59.762 } 00:17:59.762 Got JSON-RPC error response 00:17:59.762 response: 00:17:59.762 { 00:17:59.762 "code": -126, 00:17:59.762 "message": "Required key not available" 00:17:59.762 } 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2947562 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2947562 ']' 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2947562 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2947562 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947562' 00:17:59.762 killing process with pid 2947562 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2947562 00:17:59.762 Received shutdown signal, test time was about 10.000000 seconds 00:17:59.762 00:17:59.762 Latency(us) 00:17:59.762 [2024-11-15T10:36:40.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.762 [2024-11-15T10:36:40.189Z] =================================================================================================================== 00:17:59.762 [2024-11-15T10:36:40.189Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:59.762 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2947562 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2943788 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2943788 ']' 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2943788 00:18:00.019 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2943788 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2943788' 00:18:00.020 killing process with pid 2943788 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2943788 00:18:00.020 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2943788 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:00.277 11:36:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.2HXtexgyle 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.2HXtexgyle 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2947716 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2947716 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2947716 ']' 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.277 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.277 [2024-11-15 11:36:40.637680] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:00.277 [2024-11-15 11:36:40.637787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.536 [2024-11-15 11:36:40.708325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.536 [2024-11-15 11:36:40.759757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.536 [2024-11-15 11:36:40.759815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
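The key_long value above comes from format_interchange_psk, which wraps the raw configured key in the NVMe TLS PSK interchange format: a "NVMeTLSkey-1" prefix, a two-digit hash indicator, and a base64 blob, with a trailing colon. A Python sketch of that derivation, under the assumption (consistent with the value printed in the log) that the blob is base64(key || CRC-32(key)) with the CRC appended little-endian; this is a reimplementation for illustration, not the shell helper itself:

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # Assumed layout: "NVMeTLSkey-1:<hh>:" + base64(key bytes + CRC32) + ":".
        raw = key.encode("ascii")
        crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
        blob = base64.b64encode(raw + crc).decode("ascii")
        return f"NVMeTLSkey-1:{digest:02x}:{blob}:"

    # Should reproduce the key_long value used by the test; digest 2 selects
    # the "02" hash indicator in the interchange header.
    print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))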
00:18:00.536 [2024-11-15 11:36:40.759842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.536 [2024-11-15 11:36:40.759852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.536 [2024-11-15 11:36:40.759862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.536 [2024-11-15 11:36:40.760432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.2HXtexgyle 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2HXtexgyle 00:18:00.536 11:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.794 [2024-11-15 11:36:41.160999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.794 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.052 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.311 [2024-11-15 11:36:41.694468] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.311 [2024-11-15 11:36:41.694745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.311 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:01.569 malloc0 00:18:01.569 11:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.136 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:02.136 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2HXtexgyle 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2HXtexgyle 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2948000 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2948000 /var/tmp/bdevperf.sock 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2948000 ']' 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.394 11:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.652 [2024-11-15 11:36:42.836530] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:18:02.652 [2024-11-15 11:36:42.836624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2948000 ] 00:18:02.652 [2024-11-15 11:36:42.901759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.652 [2024-11-15 11:36:42.958812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.652 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.652 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:02.652 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:03.218 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.218 [2024-11-15 11:36:43.597365] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.476 TLSTESTn1 00:18:03.476 11:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:03.476 Running I/O for 10 seconds... 00:18:05.779 3451.00 IOPS, 13.48 MiB/s [2024-11-15T10:36:47.137Z] 3506.00 IOPS, 13.70 MiB/s [2024-11-15T10:36:48.122Z] 3481.00 IOPS, 13.60 MiB/s [2024-11-15T10:36:49.054Z] 3488.25 IOPS, 13.63 MiB/s [2024-11-15T10:36:49.984Z] 3499.00 IOPS, 13.67 MiB/s [2024-11-15T10:36:50.915Z] 3505.67 IOPS, 13.69 MiB/s [2024-11-15T10:36:51.846Z] 3507.29 IOPS, 13.70 MiB/s [2024-11-15T10:36:53.216Z] 3499.75 IOPS, 13.67 MiB/s [2024-11-15T10:36:54.148Z] 3491.22 IOPS, 13.64 MiB/s [2024-11-15T10:36:54.148Z] 3495.40 IOPS, 13.65 MiB/s 00:18:13.721 Latency(us) 00:18:13.721 [2024-11-15T10:36:54.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.721 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:13.721 Verification LBA range: start 0x0 length 0x2000 00:18:13.721 TLSTESTn1 : 10.02 3500.44 13.67 0.00 0.00 36504.32 8349.77 47380.10 00:18:13.721 [2024-11-15T10:36:54.148Z] =================================================================================================================== 00:18:13.721 [2024-11-15T10:36:54.148Z] Total : 3500.44 13.67 0.00 0.00 36504.32 8349.77 47380.10 00:18:13.721 { 00:18:13.721 "results": [ 00:18:13.721 { 00:18:13.721 "job": "TLSTESTn1", 00:18:13.721 "core_mask": "0x4", 00:18:13.721 "workload": "verify", 00:18:13.721 "status": "finished", 00:18:13.721 "verify_range": { 00:18:13.721 "start": 0, 00:18:13.721 "length": 8192 00:18:13.721 }, 00:18:13.721 "queue_depth": 128, 00:18:13.721 "io_size": 4096, 00:18:13.721 "runtime": 10.021881, 00:18:13.721 "iops": 3500.440685735542, 00:18:13.721 "mibps": 13.673596428654461, 00:18:13.721 "io_failed": 0, 00:18:13.721 "io_timeout": 0, 00:18:13.721 "avg_latency_us": 36504.32007973083, 00:18:13.721 "min_latency_us": 8349.771851851852, 00:18:13.721 "max_latency_us": 47380.10074074074 00:18:13.721 } 00:18:13.721 ], 00:18:13.721 
"core_count": 1 00:18:13.721 } 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2948000 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2948000 ']' 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2948000 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948000 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948000' 00:18:13.722 killing process with pid 2948000 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2948000 00:18:13.722 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.722 00:18:13.722 Latency(us) 00:18:13.722 [2024-11-15T10:36:54.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.722 [2024-11-15T10:36:54.149Z] =================================================================================================================== 00:18:13.722 [2024-11-15T10:36:54.149Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.722 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2948000 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.2HXtexgyle 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2HXtexgyle 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2HXtexgyle 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2HXtexgyle 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2HXtexgyle 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2949323 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2949323 /var/tmp/bdevperf.sock 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2949323 ']' 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.722 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.979 [2024-11-15 11:36:54.169394] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:18:13.979 [2024-11-15 11:36:54.169479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2949323 ] 00:18:13.979 [2024-11-15 11:36:54.236013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.979 [2024-11-15 11:36:54.293721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.237 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.237 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.237 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:14.495 [2024-11-15 11:36:54.662802] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2HXtexgyle': 0100666 00:18:14.495 [2024-11-15 11:36:54.662845] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:14.495 request: 00:18:14.495 { 00:18:14.495 "name": "key0", 00:18:14.495 "path": "/tmp/tmp.2HXtexgyle", 00:18:14.495 "method": "keyring_file_add_key", 00:18:14.495 "req_id": 1 00:18:14.495 } 00:18:14.495 Got JSON-RPC error response 00:18:14.495 response: 00:18:14.495 { 00:18:14.495 "code": -1, 00:18:14.495 "message": "Operation not permitted" 00:18:14.495 } 00:18:14.495 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.752 [2024-11-15 11:36:54.927591] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.753 [2024-11-15 11:36:54.927645] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:14.753 request: 00:18:14.753 { 00:18:14.753 "name": "TLSTEST", 00:18:14.753 "trtype": "tcp", 00:18:14.753 "traddr": "10.0.0.2", 00:18:14.753 "adrfam": "ipv4", 00:18:14.753 "trsvcid": "4420", 00:18:14.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.753 "prchk_reftag": false, 00:18:14.753 "prchk_guard": false, 00:18:14.753 "hdgst": false, 00:18:14.753 "ddgst": false, 00:18:14.753 "psk": "key0", 00:18:14.753 "allow_unrecognized_csi": false, 00:18:14.753 "method": "bdev_nvme_attach_controller", 00:18:14.753 "req_id": 1 00:18:14.753 } 00:18:14.753 Got JSON-RPC error response 00:18:14.753 response: 00:18:14.753 { 00:18:14.753 "code": -126, 00:18:14.753 "message": "Required key not available" 00:18:14.753 } 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2949323 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2949323 ']' 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2949323 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949323 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949323' 00:18:14.753 killing process with pid 2949323 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2949323 00:18:14.753 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.753 00:18:14.753 Latency(us) 00:18:14.753 [2024-11-15T10:36:55.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.753 [2024-11-15T10:36:55.180Z] =================================================================================================================== 00:18:14.753 [2024-11-15T10:36:55.180Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.753 11:36:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2949323 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2947716 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2947716 ']' 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2947716 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947716 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947716' 00:18:15.012 killing process with pid 2947716 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2947716 00:18:15.012 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2947716 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2949475 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2949475 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2949475 ']' 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.269 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.270 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.270 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.270 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.270 [2024-11-15 11:36:55.538735] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:15.270 [2024-11-15 11:36:55.538837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.270 [2024-11-15 11:36:55.609194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.270 [2024-11-15 11:36:55.661331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.270 [2024-11-15 11:36:55.661382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.270 [2024-11-15 11:36:55.661411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.270 [2024-11-15 11:36:55.661422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.270 [2024-11-15 11:36:55.661432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
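The failures above come from SPDK's file-based keyring permission check: keyring_file_check_path appears to reject PSK files that are readable by group or other, so after the chmod 0666 the key cannot be registered ("Operation not permitted", code -1) and the subsequent bdev_nvme_attach_controller cannot load it ("Could not load PSK: key0"). A minimal sketch of what the test exercises, using the same RPCs as the trace (the key path is the test's temporary PSK file):

    chmod 0666 /tmp/tmp.2HXtexgyle
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle   # expected to fail: Operation not permitted
    chmod 0600 /tmp/tmp.2HXtexgyle
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle   # expected to succeed once the file is owner-only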
00:18:15.270 [2024-11-15 11:36:55.662026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.2HXtexgyle 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2HXtexgyle 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.527 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:15.528 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.528 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.2HXtexgyle 00:18:15.528 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2HXtexgyle 00:18:15.528 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.786 [2024-11-15 11:36:56.054852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.786 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:16.045 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.303 [2024-11-15 11:36:56.592370] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.303 [2024-11-15 11:36:56.592657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.303 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:16.561 malloc0 00:18:16.561 11:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.819 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:17.386 [2024-11-15 
11:36:57.513743] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2HXtexgyle': 0100666 00:18:17.386 [2024-11-15 11:36:57.513777] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:17.386 request: 00:18:17.386 { 00:18:17.386 "name": "key0", 00:18:17.386 "path": "/tmp/tmp.2HXtexgyle", 00:18:17.386 "method": "keyring_file_add_key", 00:18:17.386 "req_id": 1 00:18:17.386 } 00:18:17.386 Got JSON-RPC error response 00:18:17.386 response: 00:18:17.386 { 00:18:17.386 "code": -1, 00:18:17.386 "message": "Operation not permitted" 00:18:17.386 } 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.386 [2024-11-15 11:36:57.778489] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:17.386 [2024-11-15 11:36:57.778548] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:17.386 request: 00:18:17.386 { 00:18:17.386 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.386 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.386 "psk": "key0", 00:18:17.386 "method": "nvmf_subsystem_add_host", 00:18:17.386 "req_id": 1 00:18:17.386 } 00:18:17.386 Got JSON-RPC error response 00:18:17.386 response: 00:18:17.386 { 00:18:17.386 "code": -32603, 00:18:17.386 "message": "Internal error" 00:18:17.386 } 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2949475 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2949475 ']' 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2949475 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.386 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949475 00:18:17.645 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:17.645 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:17.645 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949475' 00:18:17.645 killing process with pid 2949475 00:18:17.645 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2949475 00:18:17.645 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2949475 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.2HXtexgyle 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:17.645 11:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2949890 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2949890 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2949890 ']' 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.645 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.904 [2024-11-15 11:36:58.113144] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:17.904 [2024-11-15 11:36:58.113231] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.904 [2024-11-15 11:36:58.183811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.904 [2024-11-15 11:36:58.238362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.904 [2024-11-15 11:36:58.238421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.904 [2024-11-15 11:36:58.238447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.904 [2024-11-15 11:36:58.238458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.904 [2024-11-15 11:36:58.238467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
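With the key file restored to 0600 by the chmod above, the next setup_nvmf_tgt run succeeds. The RPC sequence it drives is roughly the following (commands taken from the trace that follows; 10.0.0.2:4420 and the NQNs are the test's fixed values, and the -k flag on the listener is what enables TLS):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2HXtexgyle
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0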
00:18:17.904 [2024-11-15 11:36:58.239021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.2HXtexgyle 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2HXtexgyle 00:18:18.162 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:18.420 [2024-11-15 11:36:58.617128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.420 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.678 11:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:18.938 [2024-11-15 11:36:59.154648] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.938 [2024-11-15 11:36:59.154919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.938 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:19.196 malloc0 00:18:19.196 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.454 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:19.712 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2950140 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2950140 /var/tmp/bdevperf.sock 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2950140 ']' 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.970 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.970 [2024-11-15 11:37:00.296089] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:19.970 [2024-11-15 11:37:00.296171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950140 ] 00:18:19.970 [2024-11-15 11:37:00.366378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.227 [2024-11-15 11:37:00.428985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.227 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.227 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.227 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:20.485 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.742 [2024-11-15 11:37:01.046814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.742 TLSTESTn1 00:18:20.742 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:21.309 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:21.310 "subsystems": [ 00:18:21.310 { 00:18:21.310 "subsystem": "keyring", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "keyring_file_add_key", 00:18:21.310 "params": { 00:18:21.310 "name": "key0", 00:18:21.310 "path": "/tmp/tmp.2HXtexgyle" 00:18:21.310 } 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "iobuf", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "iobuf_set_options", 00:18:21.310 "params": { 00:18:21.310 "small_pool_count": 8192, 00:18:21.310 "large_pool_count": 1024, 00:18:21.310 "small_bufsize": 8192, 00:18:21.310 "large_bufsize": 135168, 00:18:21.310 "enable_numa": false 00:18:21.310 } 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "sock", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "sock_set_default_impl", 00:18:21.310 "params": { 00:18:21.310 "impl_name": "posix" 
00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "sock_impl_set_options", 00:18:21.310 "params": { 00:18:21.310 "impl_name": "ssl", 00:18:21.310 "recv_buf_size": 4096, 00:18:21.310 "send_buf_size": 4096, 00:18:21.310 "enable_recv_pipe": true, 00:18:21.310 "enable_quickack": false, 00:18:21.310 "enable_placement_id": 0, 00:18:21.310 "enable_zerocopy_send_server": true, 00:18:21.310 "enable_zerocopy_send_client": false, 00:18:21.310 "zerocopy_threshold": 0, 00:18:21.310 "tls_version": 0, 00:18:21.310 "enable_ktls": false 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "sock_impl_set_options", 00:18:21.310 "params": { 00:18:21.310 "impl_name": "posix", 00:18:21.310 "recv_buf_size": 2097152, 00:18:21.310 "send_buf_size": 2097152, 00:18:21.310 "enable_recv_pipe": true, 00:18:21.310 "enable_quickack": false, 00:18:21.310 "enable_placement_id": 0, 00:18:21.310 "enable_zerocopy_send_server": true, 00:18:21.310 "enable_zerocopy_send_client": false, 00:18:21.310 "zerocopy_threshold": 0, 00:18:21.310 "tls_version": 0, 00:18:21.310 "enable_ktls": false 00:18:21.310 } 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "vmd", 00:18:21.310 "config": [] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "accel", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "accel_set_options", 00:18:21.310 "params": { 00:18:21.310 "small_cache_size": 128, 00:18:21.310 "large_cache_size": 16, 00:18:21.310 "task_count": 2048, 00:18:21.310 "sequence_count": 2048, 00:18:21.310 "buf_count": 2048 00:18:21.310 } 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "bdev", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "bdev_set_options", 00:18:21.310 "params": { 00:18:21.310 "bdev_io_pool_size": 65535, 00:18:21.310 "bdev_io_cache_size": 256, 00:18:21.310 "bdev_auto_examine": true, 00:18:21.310 "iobuf_small_cache_size": 128, 00:18:21.310 "iobuf_large_cache_size": 16 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "bdev_raid_set_options", 00:18:21.310 "params": { 00:18:21.310 "process_window_size_kb": 1024, 00:18:21.310 "process_max_bandwidth_mb_sec": 0 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "bdev_iscsi_set_options", 00:18:21.310 "params": { 00:18:21.310 "timeout_sec": 30 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "bdev_nvme_set_options", 00:18:21.310 "params": { 00:18:21.310 "action_on_timeout": "none", 00:18:21.310 "timeout_us": 0, 00:18:21.310 "timeout_admin_us": 0, 00:18:21.310 "keep_alive_timeout_ms": 10000, 00:18:21.310 "arbitration_burst": 0, 00:18:21.310 "low_priority_weight": 0, 00:18:21.310 "medium_priority_weight": 0, 00:18:21.310 "high_priority_weight": 0, 00:18:21.310 "nvme_adminq_poll_period_us": 10000, 00:18:21.310 "nvme_ioq_poll_period_us": 0, 00:18:21.310 "io_queue_requests": 0, 00:18:21.310 "delay_cmd_submit": true, 00:18:21.310 "transport_retry_count": 4, 00:18:21.310 "bdev_retry_count": 3, 00:18:21.310 "transport_ack_timeout": 0, 00:18:21.310 "ctrlr_loss_timeout_sec": 0, 00:18:21.310 "reconnect_delay_sec": 0, 00:18:21.310 "fast_io_fail_timeout_sec": 0, 00:18:21.310 "disable_auto_failback": false, 00:18:21.310 "generate_uuids": false, 00:18:21.310 "transport_tos": 0, 00:18:21.310 "nvme_error_stat": false, 00:18:21.310 "rdma_srq_size": 0, 00:18:21.310 "io_path_stat": false, 00:18:21.310 "allow_accel_sequence": false, 00:18:21.310 "rdma_max_cq_size": 0, 00:18:21.310 
"rdma_cm_event_timeout_ms": 0, 00:18:21.310 "dhchap_digests": [ 00:18:21.310 "sha256", 00:18:21.310 "sha384", 00:18:21.310 "sha512" 00:18:21.310 ], 00:18:21.310 "dhchap_dhgroups": [ 00:18:21.310 "null", 00:18:21.310 "ffdhe2048", 00:18:21.310 "ffdhe3072", 00:18:21.310 "ffdhe4096", 00:18:21.310 "ffdhe6144", 00:18:21.310 "ffdhe8192" 00:18:21.310 ] 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "bdev_nvme_set_hotplug", 00:18:21.310 "params": { 00:18:21.310 "period_us": 100000, 00:18:21.310 "enable": false 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "bdev_malloc_create", 00:18:21.310 "params": { 00:18:21.310 "name": "malloc0", 00:18:21.310 "num_blocks": 8192, 00:18:21.310 "block_size": 4096, 00:18:21.310 "physical_block_size": 4096, 00:18:21.310 "uuid": "1a6b998a-f818-4f53-9d7b-260984612cab", 00:18:21.310 "optimal_io_boundary": 0, 00:18:21.310 "md_size": 0, 00:18:21.310 "dif_type": 0, 00:18:21.310 "dif_is_head_of_md": false, 00:18:21.310 "dif_pi_format": 0 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "bdev_wait_for_examine" 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "nbd", 00:18:21.310 "config": [] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "scheduler", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "framework_set_scheduler", 00:18:21.310 "params": { 00:18:21.310 "name": "static" 00:18:21.310 } 00:18:21.310 } 00:18:21.310 ] 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "subsystem": "nvmf", 00:18:21.310 "config": [ 00:18:21.310 { 00:18:21.310 "method": "nvmf_set_config", 00:18:21.310 "params": { 00:18:21.310 "discovery_filter": "match_any", 00:18:21.310 "admin_cmd_passthru": { 00:18:21.310 "identify_ctrlr": false 00:18:21.310 }, 00:18:21.310 "dhchap_digests": [ 00:18:21.310 "sha256", 00:18:21.310 "sha384", 00:18:21.310 "sha512" 00:18:21.310 ], 00:18:21.310 "dhchap_dhgroups": [ 00:18:21.310 "null", 00:18:21.310 "ffdhe2048", 00:18:21.310 "ffdhe3072", 00:18:21.310 "ffdhe4096", 00:18:21.310 "ffdhe6144", 00:18:21.310 "ffdhe8192" 00:18:21.310 ] 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "nvmf_set_max_subsystems", 00:18:21.310 "params": { 00:18:21.310 "max_subsystems": 1024 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "nvmf_set_crdt", 00:18:21.310 "params": { 00:18:21.310 "crdt1": 0, 00:18:21.310 "crdt2": 0, 00:18:21.310 "crdt3": 0 00:18:21.310 } 00:18:21.310 }, 00:18:21.310 { 00:18:21.310 "method": "nvmf_create_transport", 00:18:21.310 "params": { 00:18:21.310 "trtype": "TCP", 00:18:21.310 "max_queue_depth": 128, 00:18:21.311 "max_io_qpairs_per_ctrlr": 127, 00:18:21.311 "in_capsule_data_size": 4096, 00:18:21.311 "max_io_size": 131072, 00:18:21.311 "io_unit_size": 131072, 00:18:21.311 "max_aq_depth": 128, 00:18:21.311 "num_shared_buffers": 511, 00:18:21.311 "buf_cache_size": 4294967295, 00:18:21.311 "dif_insert_or_strip": false, 00:18:21.311 "zcopy": false, 00:18:21.311 "c2h_success": false, 00:18:21.311 "sock_priority": 0, 00:18:21.311 "abort_timeout_sec": 1, 00:18:21.311 "ack_timeout": 0, 00:18:21.311 "data_wr_pool_size": 0 00:18:21.311 } 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "method": "nvmf_create_subsystem", 00:18:21.311 "params": { 00:18:21.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.311 "allow_any_host": false, 00:18:21.311 "serial_number": "SPDK00000000000001", 00:18:21.311 "model_number": "SPDK bdev Controller", 00:18:21.311 "max_namespaces": 10, 00:18:21.311 "min_cntlid": 1, 00:18:21.311 
"max_cntlid": 65519, 00:18:21.311 "ana_reporting": false 00:18:21.311 } 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "method": "nvmf_subsystem_add_host", 00:18:21.311 "params": { 00:18:21.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.311 "host": "nqn.2016-06.io.spdk:host1", 00:18:21.311 "psk": "key0" 00:18:21.311 } 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "method": "nvmf_subsystem_add_ns", 00:18:21.311 "params": { 00:18:21.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.311 "namespace": { 00:18:21.311 "nsid": 1, 00:18:21.311 "bdev_name": "malloc0", 00:18:21.311 "nguid": "1A6B998AF8184F539D7B260984612CAB", 00:18:21.311 "uuid": "1a6b998a-f818-4f53-9d7b-260984612cab", 00:18:21.311 "no_auto_visible": false 00:18:21.311 } 00:18:21.311 } 00:18:21.311 }, 00:18:21.311 { 00:18:21.311 "method": "nvmf_subsystem_add_listener", 00:18:21.311 "params": { 00:18:21.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.311 "listen_address": { 00:18:21.311 "trtype": "TCP", 00:18:21.311 "adrfam": "IPv4", 00:18:21.311 "traddr": "10.0.0.2", 00:18:21.311 "trsvcid": "4420" 00:18:21.311 }, 00:18:21.311 "secure_channel": true 00:18:21.311 } 00:18:21.311 } 00:18:21.311 ] 00:18:21.311 } 00:18:21.311 ] 00:18:21.311 }' 00:18:21.311 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:21.569 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:21.569 "subsystems": [ 00:18:21.569 { 00:18:21.569 "subsystem": "keyring", 00:18:21.569 "config": [ 00:18:21.569 { 00:18:21.569 "method": "keyring_file_add_key", 00:18:21.569 "params": { 00:18:21.569 "name": "key0", 00:18:21.569 "path": "/tmp/tmp.2HXtexgyle" 00:18:21.569 } 00:18:21.569 } 00:18:21.569 ] 00:18:21.569 }, 00:18:21.569 { 00:18:21.569 "subsystem": "iobuf", 00:18:21.569 "config": [ 00:18:21.569 { 00:18:21.569 "method": "iobuf_set_options", 00:18:21.569 "params": { 00:18:21.569 "small_pool_count": 8192, 00:18:21.569 "large_pool_count": 1024, 00:18:21.569 "small_bufsize": 8192, 00:18:21.569 "large_bufsize": 135168, 00:18:21.569 "enable_numa": false 00:18:21.569 } 00:18:21.569 } 00:18:21.569 ] 00:18:21.569 }, 00:18:21.569 { 00:18:21.569 "subsystem": "sock", 00:18:21.569 "config": [ 00:18:21.569 { 00:18:21.569 "method": "sock_set_default_impl", 00:18:21.569 "params": { 00:18:21.569 "impl_name": "posix" 00:18:21.569 } 00:18:21.569 }, 00:18:21.569 { 00:18:21.569 "method": "sock_impl_set_options", 00:18:21.569 "params": { 00:18:21.569 "impl_name": "ssl", 00:18:21.569 "recv_buf_size": 4096, 00:18:21.569 "send_buf_size": 4096, 00:18:21.569 "enable_recv_pipe": true, 00:18:21.569 "enable_quickack": false, 00:18:21.569 "enable_placement_id": 0, 00:18:21.569 "enable_zerocopy_send_server": true, 00:18:21.569 "enable_zerocopy_send_client": false, 00:18:21.569 "zerocopy_threshold": 0, 00:18:21.570 "tls_version": 0, 00:18:21.570 "enable_ktls": false 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": "sock_impl_set_options", 00:18:21.570 "params": { 00:18:21.570 "impl_name": "posix", 00:18:21.570 "recv_buf_size": 2097152, 00:18:21.570 "send_buf_size": 2097152, 00:18:21.570 "enable_recv_pipe": true, 00:18:21.570 "enable_quickack": false, 00:18:21.570 "enable_placement_id": 0, 00:18:21.570 "enable_zerocopy_send_server": true, 00:18:21.570 "enable_zerocopy_send_client": false, 00:18:21.570 "zerocopy_threshold": 0, 00:18:21.570 "tls_version": 0, 00:18:21.570 "enable_ktls": false 00:18:21.570 } 00:18:21.570 
} 00:18:21.570 ] 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "subsystem": "vmd", 00:18:21.570 "config": [] 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "subsystem": "accel", 00:18:21.570 "config": [ 00:18:21.570 { 00:18:21.570 "method": "accel_set_options", 00:18:21.570 "params": { 00:18:21.570 "small_cache_size": 128, 00:18:21.570 "large_cache_size": 16, 00:18:21.570 "task_count": 2048, 00:18:21.570 "sequence_count": 2048, 00:18:21.570 "buf_count": 2048 00:18:21.570 } 00:18:21.570 } 00:18:21.570 ] 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "subsystem": "bdev", 00:18:21.570 "config": [ 00:18:21.570 { 00:18:21.570 "method": "bdev_set_options", 00:18:21.570 "params": { 00:18:21.570 "bdev_io_pool_size": 65535, 00:18:21.570 "bdev_io_cache_size": 256, 00:18:21.570 "bdev_auto_examine": true, 00:18:21.570 "iobuf_small_cache_size": 128, 00:18:21.570 "iobuf_large_cache_size": 16 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": "bdev_raid_set_options", 00:18:21.570 "params": { 00:18:21.570 "process_window_size_kb": 1024, 00:18:21.570 "process_max_bandwidth_mb_sec": 0 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": "bdev_iscsi_set_options", 00:18:21.570 "params": { 00:18:21.570 "timeout_sec": 30 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": "bdev_nvme_set_options", 00:18:21.570 "params": { 00:18:21.570 "action_on_timeout": "none", 00:18:21.570 "timeout_us": 0, 00:18:21.570 "timeout_admin_us": 0, 00:18:21.570 "keep_alive_timeout_ms": 10000, 00:18:21.570 "arbitration_burst": 0, 00:18:21.570 "low_priority_weight": 0, 00:18:21.570 "medium_priority_weight": 0, 00:18:21.570 "high_priority_weight": 0, 00:18:21.570 "nvme_adminq_poll_period_us": 10000, 00:18:21.570 "nvme_ioq_poll_period_us": 0, 00:18:21.570 "io_queue_requests": 512, 00:18:21.570 "delay_cmd_submit": true, 00:18:21.570 "transport_retry_count": 4, 00:18:21.570 "bdev_retry_count": 3, 00:18:21.570 "transport_ack_timeout": 0, 00:18:21.570 "ctrlr_loss_timeout_sec": 0, 00:18:21.570 "reconnect_delay_sec": 0, 00:18:21.570 "fast_io_fail_timeout_sec": 0, 00:18:21.570 "disable_auto_failback": false, 00:18:21.570 "generate_uuids": false, 00:18:21.570 "transport_tos": 0, 00:18:21.570 "nvme_error_stat": false, 00:18:21.570 "rdma_srq_size": 0, 00:18:21.570 "io_path_stat": false, 00:18:21.570 "allow_accel_sequence": false, 00:18:21.570 "rdma_max_cq_size": 0, 00:18:21.570 "rdma_cm_event_timeout_ms": 0, 00:18:21.570 "dhchap_digests": [ 00:18:21.570 "sha256", 00:18:21.570 "sha384", 00:18:21.570 "sha512" 00:18:21.570 ], 00:18:21.570 "dhchap_dhgroups": [ 00:18:21.570 "null", 00:18:21.570 "ffdhe2048", 00:18:21.570 "ffdhe3072", 00:18:21.570 "ffdhe4096", 00:18:21.570 "ffdhe6144", 00:18:21.570 "ffdhe8192" 00:18:21.570 ] 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": "bdev_nvme_attach_controller", 00:18:21.570 "params": { 00:18:21.570 "name": "TLSTEST", 00:18:21.570 "trtype": "TCP", 00:18:21.570 "adrfam": "IPv4", 00:18:21.570 "traddr": "10.0.0.2", 00:18:21.570 "trsvcid": "4420", 00:18:21.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.570 "prchk_reftag": false, 00:18:21.570 "prchk_guard": false, 00:18:21.570 "ctrlr_loss_timeout_sec": 0, 00:18:21.570 "reconnect_delay_sec": 0, 00:18:21.570 "fast_io_fail_timeout_sec": 0, 00:18:21.570 "psk": "key0", 00:18:21.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:21.570 "hdgst": false, 00:18:21.570 "ddgst": false, 00:18:21.570 "multipath": "multipath" 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": 
"bdev_nvme_set_hotplug", 00:18:21.570 "params": { 00:18:21.570 "period_us": 100000, 00:18:21.570 "enable": false 00:18:21.570 } 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "method": "bdev_wait_for_examine" 00:18:21.570 } 00:18:21.570 ] 00:18:21.570 }, 00:18:21.570 { 00:18:21.570 "subsystem": "nbd", 00:18:21.570 "config": [] 00:18:21.570 } 00:18:21.570 ] 00:18:21.570 }' 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2950140 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2950140 ']' 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2950140 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2950140 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2950140' 00:18:21.570 killing process with pid 2950140 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2950140 00:18:21.570 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.570 00:18:21.570 Latency(us) 00:18:21.570 [2024-11-15T10:37:01.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.570 [2024-11-15T10:37:01.997Z] =================================================================================================================== 00:18:21.570 [2024-11-15T10:37:01.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.570 11:37:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2950140 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2949890 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2949890 ']' 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2949890 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949890 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949890' 00:18:21.828 killing process with pid 2949890 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2949890 00:18:21.828 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2949890 00:18:22.086 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:22.086 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.086 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:22.086 "subsystems": [ 00:18:22.086 { 00:18:22.086 "subsystem": "keyring", 00:18:22.086 "config": [ 00:18:22.086 { 00:18:22.086 "method": "keyring_file_add_key", 00:18:22.086 "params": { 00:18:22.086 "name": "key0", 00:18:22.086 "path": "/tmp/tmp.2HXtexgyle" 00:18:22.086 } 00:18:22.086 } 00:18:22.086 ] 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "subsystem": "iobuf", 00:18:22.086 "config": [ 00:18:22.086 { 00:18:22.086 "method": "iobuf_set_options", 00:18:22.086 "params": { 00:18:22.086 "small_pool_count": 8192, 00:18:22.086 "large_pool_count": 1024, 00:18:22.086 "small_bufsize": 8192, 00:18:22.086 "large_bufsize": 135168, 00:18:22.086 "enable_numa": false 00:18:22.086 } 00:18:22.086 } 00:18:22.086 ] 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "subsystem": "sock", 00:18:22.086 "config": [ 00:18:22.086 { 00:18:22.086 "method": "sock_set_default_impl", 00:18:22.086 "params": { 00:18:22.086 "impl_name": "posix" 00:18:22.086 } 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "method": "sock_impl_set_options", 00:18:22.086 "params": { 00:18:22.086 "impl_name": "ssl", 00:18:22.086 "recv_buf_size": 4096, 00:18:22.086 "send_buf_size": 4096, 00:18:22.086 "enable_recv_pipe": true, 00:18:22.086 "enable_quickack": false, 00:18:22.086 "enable_placement_id": 0, 00:18:22.086 "enable_zerocopy_send_server": true, 00:18:22.086 "enable_zerocopy_send_client": false, 00:18:22.086 "zerocopy_threshold": 0, 00:18:22.086 "tls_version": 0, 00:18:22.086 "enable_ktls": false 00:18:22.086 } 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "method": "sock_impl_set_options", 00:18:22.086 "params": { 00:18:22.086 "impl_name": "posix", 00:18:22.086 "recv_buf_size": 2097152, 00:18:22.086 "send_buf_size": 2097152, 00:18:22.086 "enable_recv_pipe": true, 00:18:22.086 "enable_quickack": false, 00:18:22.086 "enable_placement_id": 0, 00:18:22.086 "enable_zerocopy_send_server": true, 00:18:22.086 "enable_zerocopy_send_client": false, 00:18:22.086 "zerocopy_threshold": 0, 00:18:22.086 "tls_version": 0, 00:18:22.086 "enable_ktls": false 00:18:22.086 } 00:18:22.086 } 00:18:22.086 ] 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "subsystem": "vmd", 00:18:22.086 "config": [] 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "subsystem": "accel", 00:18:22.086 "config": [ 00:18:22.086 { 00:18:22.086 "method": "accel_set_options", 00:18:22.086 "params": { 00:18:22.086 "small_cache_size": 128, 00:18:22.086 "large_cache_size": 16, 00:18:22.086 "task_count": 2048, 00:18:22.086 "sequence_count": 2048, 00:18:22.086 "buf_count": 2048 00:18:22.086 } 00:18:22.086 } 00:18:22.086 ] 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "subsystem": "bdev", 00:18:22.086 "config": [ 00:18:22.086 { 00:18:22.086 "method": "bdev_set_options", 00:18:22.086 "params": { 00:18:22.086 "bdev_io_pool_size": 65535, 00:18:22.086 "bdev_io_cache_size": 256, 00:18:22.086 "bdev_auto_examine": true, 00:18:22.086 "iobuf_small_cache_size": 128, 00:18:22.086 "iobuf_large_cache_size": 16 00:18:22.086 } 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "method": "bdev_raid_set_options", 00:18:22.086 "params": { 00:18:22.086 "process_window_size_kb": 1024, 00:18:22.086 "process_max_bandwidth_mb_sec": 0 00:18:22.086 } 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "method": "bdev_iscsi_set_options", 00:18:22.086 "params": { 00:18:22.086 
"timeout_sec": 30 00:18:22.086 } 00:18:22.086 }, 00:18:22.086 { 00:18:22.086 "method": "bdev_nvme_set_options", 00:18:22.086 "params": { 00:18:22.087 "action_on_timeout": "none", 00:18:22.087 "timeout_us": 0, 00:18:22.087 "timeout_admin_us": 0, 00:18:22.087 "keep_alive_timeout_ms": 10000, 00:18:22.087 "arbitration_burst": 0, 00:18:22.087 "low_priority_weight": 0, 00:18:22.087 "medium_priority_weight": 0, 00:18:22.087 "high_priority_weight": 0, 00:18:22.087 "nvme_adminq_poll_period_us": 10000, 00:18:22.087 "nvme_ioq_poll_period_us": 0, 00:18:22.087 "io_queue_requests": 0, 00:18:22.087 "delay_cmd_submit": true, 00:18:22.087 "transport_retry_count": 4, 00:18:22.087 "bdev_retry_count": 3, 00:18:22.087 "transport_ack_timeout": 0, 00:18:22.087 "ctrlr_loss_timeout_sec": 0, 00:18:22.087 "reconnect_delay_sec": 0, 00:18:22.087 "fast_io_fail_timeout_sec": 0, 00:18:22.087 "disable_auto_failback": false, 00:18:22.087 "generate_uuids": false, 00:18:22.087 "transport_tos": 0, 00:18:22.087 "nvme_error_stat": false, 00:18:22.087 "rdma_srq_size": 0, 00:18:22.087 "io_path_stat": false, 00:18:22.087 "allow_accel_sequence": false, 00:18:22.087 "rdma_max_cq_size": 0, 00:18:22.087 "rdma_cm_event_timeout_ms": 0, 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.087 "dhchap_digests": [ 00:18:22.087 "sha256", 00:18:22.087 "sha384", 00:18:22.087 "sha512" 00:18:22.087 ], 00:18:22.087 "dhchap_dhgroups": [ 00:18:22.087 "null", 00:18:22.087 "ffdhe2048", 00:18:22.087 "ffdhe3072", 00:18:22.087 "ffdhe4096", 00:18:22.087 "ffdhe6144", 00:18:22.087 "ffdhe8192" 00:18:22.087 ] 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "bdev_nvme_set_hotplug", 00:18:22.087 "params": { 00:18:22.087 "period_us": 100000, 00:18:22.087 "enable": false 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "bdev_malloc_create", 00:18:22.087 "params": { 00:18:22.087 "name": "malloc0", 00:18:22.087 "num_blocks": 8192, 00:18:22.087 "block_size": 4096, 00:18:22.087 "physical_block_size": 4096, 00:18:22.087 "uuid": "1a6b998a-f818-4f53-9d7b-260984612cab", 00:18:22.087 "optimal_io_boundary": 0, 00:18:22.087 "md_size": 0, 00:18:22.087 "dif_type": 0, 00:18:22.087 "dif_is_head_of_md": false, 00:18:22.087 "dif_pi_format": 0 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "bdev_wait_for_examine" 00:18:22.087 } 00:18:22.087 ] 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "subsystem": "nbd", 00:18:22.087 "config": [] 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "subsystem": "scheduler", 00:18:22.087 "config": [ 00:18:22.087 { 00:18:22.087 "method": "framework_set_scheduler", 00:18:22.087 "params": { 00:18:22.087 "name": "static" 00:18:22.087 } 00:18:22.087 } 00:18:22.087 ] 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "subsystem": "nvmf", 00:18:22.087 "config": [ 00:18:22.087 { 00:18:22.087 "method": "nvmf_set_config", 00:18:22.087 "params": { 00:18:22.087 "discovery_filter": "match_any", 00:18:22.087 "admin_cmd_passthru": { 00:18:22.087 "identify_ctrlr": false 00:18:22.087 }, 00:18:22.087 "dhchap_digests": [ 00:18:22.087 "sha256", 00:18:22.087 "sha384", 00:18:22.087 "sha512" 00:18:22.087 ], 00:18:22.087 "dhchap_dhgroups": [ 00:18:22.087 "null", 00:18:22.087 "ffdhe2048", 00:18:22.087 "ffdhe3072", 00:18:22.087 "ffdhe4096", 00:18:22.087 "ffdhe6144", 00:18:22.087 "ffdhe8192" 00:18:22.087 ] 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_set_max_subsystems", 00:18:22.087 "params": { 00:18:22.087 "max_subsystems": 1024 
00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_set_crdt", 00:18:22.087 "params": { 00:18:22.087 "crdt1": 0, 00:18:22.087 "crdt2": 0, 00:18:22.087 "crdt3": 0 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_create_transport", 00:18:22.087 "params": { 00:18:22.087 "trtype": "TCP", 00:18:22.087 "max_queue_depth": 128, 00:18:22.087 "max_io_qpairs_per_ctrlr": 127, 00:18:22.087 "in_capsule_data_size": 4096, 00:18:22.087 "max_io_size": 131072, 00:18:22.087 "io_unit_size": 131072, 00:18:22.087 "max_aq_depth": 128, 00:18:22.087 "num_shared_buffers": 511, 00:18:22.087 "buf_cache_size": 4294967295, 00:18:22.087 "dif_insert_or_strip": false, 00:18:22.087 "zcopy": false, 00:18:22.087 "c2h_success": false, 00:18:22.087 "sock_priority": 0, 00:18:22.087 "abort_timeout_sec": 1, 00:18:22.087 "ack_timeout": 0, 00:18:22.087 "data_wr_pool_size": 0 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_create_subsystem", 00:18:22.087 "params": { 00:18:22.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.087 "allow_any_host": false, 00:18:22.087 "serial_number": "SPDK00000000000001", 00:18:22.087 "model_number": "SPDK bdev Controller", 00:18:22.087 "max_namespaces": 10, 00:18:22.087 "min_cntlid": 1, 00:18:22.087 "max_cntlid": 65519, 00:18:22.087 "ana_reporting": false 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_subsystem_add_host", 00:18:22.087 "params": { 00:18:22.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.087 "host": "nqn.2016-06.io.spdk:host1", 00:18:22.087 "psk": "key0" 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_subsystem_add_ns", 00:18:22.087 "params": { 00:18:22.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.087 "namespace": { 00:18:22.087 "nsid": 1, 00:18:22.087 "bdev_name": "malloc0", 00:18:22.087 "nguid": "1A6B998AF8184F539D7B260984612CAB", 00:18:22.087 "uuid": "1a6b998a-f818-4f53-9d7b-260984612cab", 00:18:22.087 "no_auto_visible": false 00:18:22.087 } 00:18:22.087 } 00:18:22.087 }, 00:18:22.087 { 00:18:22.087 "method": "nvmf_subsystem_add_listener", 00:18:22.087 "params": { 00:18:22.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.087 "listen_address": { 00:18:22.087 "trtype": "TCP", 00:18:22.087 "adrfam": "IPv4", 00:18:22.087 "traddr": "10.0.0.2", 00:18:22.087 "trsvcid": "4420" 00:18:22.087 }, 00:18:22.087 "secure_channel": true 00:18:22.087 } 00:18:22.087 } 00:18:22.087 ] 00:18:22.087 } 00:18:22.087 ] 00:18:22.087 }' 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2950339 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2950339 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2950339 ']' 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:22.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.087 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.087 [2024-11-15 11:37:02.426772] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:22.087 [2024-11-15 11:37:02.426855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.087 [2024-11-15 11:37:02.502701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.345 [2024-11-15 11:37:02.560506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.345 [2024-11-15 11:37:02.560560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.345 [2024-11-15 11:37:02.560574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.345 [2024-11-15 11:37:02.560586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.345 [2024-11-15 11:37:02.560595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.345 [2024-11-15 11:37:02.561210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.603 [2024-11-15 11:37:02.805213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.603 [2024-11-15 11:37:02.837221] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.603 [2024-11-15 11:37:02.837509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2950493 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2950493 /var/tmp/bdevperf.sock 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2950493 ']' 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:23.169 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.169 11:37:03 
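The save_config dumps captured above are then replayed: the trace shows a fresh nvmf_tgt started with -c /dev/fd/62 and a bdevperf started with -c /dev/fd/63, i.e. the saved JSON configuration is handed over on a file descriptor instead of being rebuilt RPC by RPC. A minimal sketch of that pattern (the tgtconf variable name and the redirection syntax are illustrative assumptions, not the script's exact wording):

    tgtconf=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 62<<< "$tgtconf"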
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:23.169 "subsystems": [ 00:18:23.170 { 00:18:23.170 "subsystem": "keyring", 00:18:23.170 "config": [ 00:18:23.170 { 00:18:23.170 "method": "keyring_file_add_key", 00:18:23.170 "params": { 00:18:23.170 "name": "key0", 00:18:23.170 "path": "/tmp/tmp.2HXtexgyle" 00:18:23.170 } 00:18:23.170 } 00:18:23.170 ] 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "subsystem": "iobuf", 00:18:23.170 "config": [ 00:18:23.170 { 00:18:23.170 "method": "iobuf_set_options", 00:18:23.170 "params": { 00:18:23.170 "small_pool_count": 8192, 00:18:23.170 "large_pool_count": 1024, 00:18:23.170 "small_bufsize": 8192, 00:18:23.170 "large_bufsize": 135168, 00:18:23.170 "enable_numa": false 00:18:23.170 } 00:18:23.170 } 00:18:23.170 ] 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "subsystem": "sock", 00:18:23.170 "config": [ 00:18:23.170 { 00:18:23.170 "method": "sock_set_default_impl", 00:18:23.170 "params": { 00:18:23.170 "impl_name": "posix" 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "sock_impl_set_options", 00:18:23.170 "params": { 00:18:23.170 "impl_name": "ssl", 00:18:23.170 "recv_buf_size": 4096, 00:18:23.170 "send_buf_size": 4096, 00:18:23.170 "enable_recv_pipe": true, 00:18:23.170 "enable_quickack": false, 00:18:23.170 "enable_placement_id": 0, 00:18:23.170 "enable_zerocopy_send_server": true, 00:18:23.170 "enable_zerocopy_send_client": false, 00:18:23.170 "zerocopy_threshold": 0, 00:18:23.170 "tls_version": 0, 00:18:23.170 "enable_ktls": false 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "sock_impl_set_options", 00:18:23.170 "params": { 00:18:23.170 "impl_name": "posix", 00:18:23.170 "recv_buf_size": 2097152, 00:18:23.170 "send_buf_size": 2097152, 00:18:23.170 "enable_recv_pipe": true, 00:18:23.170 "enable_quickack": false, 00:18:23.170 "enable_placement_id": 0, 00:18:23.170 "enable_zerocopy_send_server": true, 00:18:23.170 "enable_zerocopy_send_client": false, 00:18:23.170 "zerocopy_threshold": 0, 00:18:23.170 "tls_version": 0, 00:18:23.170 "enable_ktls": false 00:18:23.170 } 00:18:23.170 } 00:18:23.170 ] 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "subsystem": "vmd", 00:18:23.170 "config": [] 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "subsystem": "accel", 00:18:23.170 "config": [ 00:18:23.170 { 00:18:23.170 "method": "accel_set_options", 00:18:23.170 "params": { 00:18:23.170 "small_cache_size": 128, 00:18:23.170 "large_cache_size": 16, 00:18:23.170 "task_count": 2048, 00:18:23.170 "sequence_count": 2048, 00:18:23.170 "buf_count": 2048 00:18:23.170 } 00:18:23.170 } 00:18:23.170 ] 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "subsystem": "bdev", 00:18:23.170 "config": [ 00:18:23.170 { 00:18:23.170 "method": "bdev_set_options", 00:18:23.170 "params": { 00:18:23.170 "bdev_io_pool_size": 65535, 00:18:23.170 "bdev_io_cache_size": 256, 00:18:23.170 "bdev_auto_examine": true, 00:18:23.170 "iobuf_small_cache_size": 128, 00:18:23.170 "iobuf_large_cache_size": 16 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "bdev_raid_set_options", 00:18:23.170 "params": { 00:18:23.170 "process_window_size_kb": 1024, 00:18:23.170 "process_max_bandwidth_mb_sec": 0 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "bdev_iscsi_set_options", 00:18:23.170 "params": { 00:18:23.170 "timeout_sec": 30 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "bdev_nvme_set_options", 00:18:23.170 "params": { 00:18:23.170 "action_on_timeout": "none", 00:18:23.170 
"timeout_us": 0, 00:18:23.170 "timeout_admin_us": 0, 00:18:23.170 "keep_alive_timeout_ms": 10000, 00:18:23.170 "arbitration_burst": 0, 00:18:23.170 "low_priority_weight": 0, 00:18:23.170 "medium_priority_weight": 0, 00:18:23.170 "high_priority_weight": 0, 00:18:23.170 "nvme_adminq_poll_period_us": 10000, 00:18:23.170 "nvme_ioq_poll_period_us": 0, 00:18:23.170 "io_queue_requests": 512, 00:18:23.170 "delay_cmd_submit": true, 00:18:23.170 "transport_retry_count": 4, 00:18:23.170 "bdev_retry_count": 3, 00:18:23.170 "transport_ack_timeout": 0, 00:18:23.170 "ctrlr_loss_timeout_sec": 0, 00:18:23.170 "reconnect_delay_sec": 0, 00:18:23.170 "fast_io_fail_timeout_sec": 0, 00:18:23.170 "disable_auto_failback": false, 00:18:23.170 "generate_uuids": false, 00:18:23.170 "transport_tos": 0, 00:18:23.170 "nvme_error_stat": false, 00:18:23.170 "rdma_srq_size": 0, 00:18:23.170 "io_path_stat": false, 00:18:23.170 "allow_accel_sequence": false, 00:18:23.170 "rdma_max_cq_size": 0, 00:18:23.170 "rdma_cm_event_timeout_ms": 0 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.170 , 00:18:23.170 "dhchap_digests": [ 00:18:23.170 "sha256", 00:18:23.170 "sha384", 00:18:23.170 "sha512" 00:18:23.170 ], 00:18:23.170 "dhchap_dhgroups": [ 00:18:23.170 "null", 00:18:23.170 "ffdhe2048", 00:18:23.170 "ffdhe3072", 00:18:23.170 "ffdhe4096", 00:18:23.170 "ffdhe6144", 00:18:23.170 "ffdhe8192" 00:18:23.170 ] 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "bdev_nvme_attach_controller", 00:18:23.170 "params": { 00:18:23.170 "name": "TLSTEST", 00:18:23.170 "trtype": "TCP", 00:18:23.170 "adrfam": "IPv4", 00:18:23.170 "traddr": "10.0.0.2", 00:18:23.170 "trsvcid": "4420", 00:18:23.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.170 "prchk_reftag": false, 00:18:23.170 "prchk_guard": false, 00:18:23.170 "ctrlr_loss_timeout_sec": 0, 00:18:23.170 "reconnect_delay_sec": 0, 00:18:23.170 "fast_io_fail_timeout_sec": 0, 00:18:23.170 "psk": "key0", 00:18:23.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.170 "hdgst": false, 00:18:23.170 "ddgst": false, 00:18:23.170 "multipath": "multipath" 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "bdev_nvme_set_hotplug", 00:18:23.170 "params": { 00:18:23.170 "period_us": 100000, 00:18:23.170 "enable": false 00:18:23.170 } 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "method": "bdev_wait_for_examine" 00:18:23.170 } 00:18:23.170 ] 00:18:23.170 }, 00:18:23.170 { 00:18:23.170 "subsystem": "nbd", 00:18:23.170 "config": [] 00:18:23.170 } 00:18:23.170 ] 00:18:23.170 }' 00:18:23.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.170 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.170 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.170 [2024-11-15 11:37:03.548421] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:18:23.170 [2024-11-15 11:37:03.548508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2950493 ] 00:18:23.428 [2024-11-15 11:37:03.618338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.428 [2024-11-15 11:37:03.675783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.428 [2024-11-15 11:37:03.852337] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.685 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.685 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.685 11:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:23.685 Running I/O for 10 seconds... 00:18:25.988 3234.00 IOPS, 12.63 MiB/s [2024-11-15T10:37:07.344Z] 3290.50 IOPS, 12.85 MiB/s [2024-11-15T10:37:08.276Z] 3391.67 IOPS, 13.25 MiB/s [2024-11-15T10:37:09.208Z] 3441.50 IOPS, 13.44 MiB/s [2024-11-15T10:37:10.139Z] 3467.80 IOPS, 13.55 MiB/s [2024-11-15T10:37:11.508Z] 3472.50 IOPS, 13.56 MiB/s [2024-11-15T10:37:12.439Z] 3483.57 IOPS, 13.61 MiB/s [2024-11-15T10:37:13.467Z] 3495.00 IOPS, 13.65 MiB/s [2024-11-15T10:37:14.399Z] 3494.56 IOPS, 13.65 MiB/s [2024-11-15T10:37:14.399Z] 3504.00 IOPS, 13.69 MiB/s 00:18:33.972 Latency(us) 00:18:33.972 [2024-11-15T10:37:14.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.972 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:33.972 Verification LBA range: start 0x0 length 0x2000 00:18:33.972 TLSTESTn1 : 10.02 3508.86 13.71 0.00 0.00 36411.05 8252.68 73400.32 00:18:33.972 [2024-11-15T10:37:14.399Z] =================================================================================================================== 00:18:33.972 [2024-11-15T10:37:14.399Z] Total : 3508.86 13.71 0.00 0.00 36411.05 8252.68 73400.32 00:18:33.972 { 00:18:33.972 "results": [ 00:18:33.972 { 00:18:33.972 "job": "TLSTESTn1", 00:18:33.972 "core_mask": "0x4", 00:18:33.972 "workload": "verify", 00:18:33.972 "status": "finished", 00:18:33.972 "verify_range": { 00:18:33.972 "start": 0, 00:18:33.972 "length": 8192 00:18:33.972 }, 00:18:33.972 "queue_depth": 128, 00:18:33.972 "io_size": 4096, 00:18:33.972 "runtime": 10.022334, 00:18:33.972 "iops": 3508.8633046952937, 00:18:33.972 "mibps": 13.70649728396599, 00:18:33.972 "io_failed": 0, 00:18:33.972 "io_timeout": 0, 00:18:33.972 "avg_latency_us": 36411.04648335087, 00:18:33.972 "min_latency_us": 8252.68148148148, 00:18:33.972 "max_latency_us": 73400.32 00:18:33.972 } 00:18:33.972 ], 00:18:33.972 "core_count": 1 00:18:33.972 } 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2950493 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2950493 ']' 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2950493 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2950493 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2950493' 00:18:33.972 killing process with pid 2950493 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2950493 00:18:33.972 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.972 00:18:33.972 Latency(us) 00:18:33.972 [2024-11-15T10:37:14.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.972 [2024-11-15T10:37:14.399Z] =================================================================================================================== 00:18:33.972 [2024-11-15T10:37:14.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.972 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2950493 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2950339 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2950339 ']' 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2950339 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2950339 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2950339' 00:18:34.231 killing process with pid 2950339 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2950339 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2950339 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2951821 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2951821 
00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2951821 ']' 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.231 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.489 [2024-11-15 11:37:14.699163] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:34.489 [2024-11-15 11:37:14.699242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.489 [2024-11-15 11:37:14.770530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.489 [2024-11-15 11:37:14.826387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.489 [2024-11-15 11:37:14.826438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.489 [2024-11-15 11:37:14.826466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.489 [2024-11-15 11:37:14.826477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.489 [2024-11-15 11:37:14.826487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.489 [2024-11-15 11:37:14.827079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.2HXtexgyle 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2HXtexgyle 00:18:34.746 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.004 [2024-11-15 11:37:15.230698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.004 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.262 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.519 [2024-11-15 11:37:15.768116] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.519 [2024-11-15 11:37:15.768411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.519 11:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.777 malloc0 00:18:35.777 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:36.036 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:36.293 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2952108 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2952108 /var/tmp/bdevperf.sock 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2952108 ']' 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.552 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.810 [2024-11-15 11:37:16.993933] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:36.810 [2024-11-15 11:37:16.994015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952108 ] 00:18:36.810 [2024-11-15 11:37:17.060820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.810 [2024-11-15 11:37:17.118204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.068 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.068 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:37.068 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:37.325 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:37.583 [2024-11-15 11:37:17.794548] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.583 nvme0n1 00:18:37.583 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.583 Running I/O for 1 seconds... 
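For reference, the TLS setup that the trace above performs reduces to a short sequence of rpc.py calls: on the target side the PSK file is registered as keyring entry "key0", the subsystem host entry is bound to that key, and the TCP listener is created with -k (secure channel); on the initiator side the same key file is loaded into the bdevperf application and the controller is attached with --psk. The sketch below is condensed from the invocations traced above; the key path (/tmp/tmp.2HXtexgyle), NQNs, addresses, and workspace paths are simply the values used in this run, and the $RPC variable is only shorthand for the full rpc.py path.

    # Target side (RPCs against the default /var/tmp/spdk.sock), as traced in setup_nvmf_tgt:
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.2HXtexgyle                                          # PSK file as keyring entry "key0"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock), as traced above:
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

    # The verify workload is then started through the bdevperf helper:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests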
00:18:38.954 3498.00 IOPS, 13.66 MiB/s 00:18:38.954 Latency(us) 00:18:38.954 [2024-11-15T10:37:19.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.954 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:38.954 Verification LBA range: start 0x0 length 0x2000 00:18:38.954 nvme0n1 : 1.02 3552.10 13.88 0.00 0.00 35693.10 5898.24 52428.80 00:18:38.954 [2024-11-15T10:37:19.381Z] =================================================================================================================== 00:18:38.954 [2024-11-15T10:37:19.381Z] Total : 3552.10 13.88 0.00 0.00 35693.10 5898.24 52428.80 00:18:38.954 { 00:18:38.954 "results": [ 00:18:38.954 { 00:18:38.954 "job": "nvme0n1", 00:18:38.954 "core_mask": "0x2", 00:18:38.954 "workload": "verify", 00:18:38.954 "status": "finished", 00:18:38.954 "verify_range": { 00:18:38.954 "start": 0, 00:18:38.954 "length": 8192 00:18:38.954 }, 00:18:38.954 "queue_depth": 128, 00:18:38.954 "io_size": 4096, 00:18:38.954 "runtime": 1.020805, 00:18:38.954 "iops": 3552.098588858793, 00:18:38.954 "mibps": 13.87538511272966, 00:18:38.954 "io_failed": 0, 00:18:38.954 "io_timeout": 0, 00:18:38.954 "avg_latency_us": 35693.09843394415, 00:18:38.954 "min_latency_us": 5898.24, 00:18:38.954 "max_latency_us": 52428.8 00:18:38.954 } 00:18:38.954 ], 00:18:38.954 "core_count": 1 00:18:38.954 } 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2952108 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2952108 ']' 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2952108 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952108 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952108' 00:18:38.954 killing process with pid 2952108 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2952108 00:18:38.954 Received shutdown signal, test time was about 1.000000 seconds 00:18:38.954 00:18:38.954 Latency(us) 00:18:38.954 [2024-11-15T10:37:19.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.954 [2024-11-15T10:37:19.381Z] =================================================================================================================== 00:18:38.954 [2024-11-15T10:37:19.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2952108 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2951821 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2951821 ']' 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2951821 00:18:38.954 11:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951821 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951821' 00:18:38.954 killing process with pid 2951821 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2951821 00:18:38.954 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2951821 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2952454 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2952454 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2952454 ']' 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.211 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.212 [2024-11-15 11:37:19.630363] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:39.212 [2024-11-15 11:37:19.630454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.469 [2024-11-15 11:37:19.704880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.469 [2024-11-15 11:37:19.760993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.469 [2024-11-15 11:37:19.761045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:39.469 [2024-11-15 11:37:19.761074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.469 [2024-11-15 11:37:19.761086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.469 [2024-11-15 11:37:19.761096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.469 [2024-11-15 11:37:19.761672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.469 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.469 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.469 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.469 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.470 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.728 [2024-11-15 11:37:19.900408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.728 malloc0 00:18:39.728 [2024-11-15 11:37:19.931956] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.728 [2024-11-15 11:37:19.932210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2952532 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2952532 /var/tmp/bdevperf.sock 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2952532 ']' 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.728 11:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.728 [2024-11-15 11:37:20.004063] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:18:39.728 [2024-11-15 11:37:20.004132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2952532 ] 00:18:39.728 [2024-11-15 11:37:20.075359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.728 [2024-11-15 11:37:20.133875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.986 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.986 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.986 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2HXtexgyle 00:18:40.243 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:40.500 [2024-11-15 11:37:20.867748] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.758 nvme0n1 00:18:40.758 11:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:40.758 Running I/O for 1 seconds... 00:18:41.690 3544.00 IOPS, 13.84 MiB/s 00:18:41.690 Latency(us) 00:18:41.690 [2024-11-15T10:37:22.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.690 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:41.690 Verification LBA range: start 0x0 length 0x2000 00:18:41.690 nvme0n1 : 1.02 3586.24 14.01 0.00 0.00 35324.14 9563.40 24660.95 00:18:41.690 [2024-11-15T10:37:22.117Z] =================================================================================================================== 00:18:41.690 [2024-11-15T10:37:22.117Z] Total : 3586.24 14.01 0.00 0.00 35324.14 9563.40 24660.95 00:18:41.690 { 00:18:41.690 "results": [ 00:18:41.690 { 00:18:41.690 "job": "nvme0n1", 00:18:41.690 "core_mask": "0x2", 00:18:41.690 "workload": "verify", 00:18:41.690 "status": "finished", 00:18:41.690 "verify_range": { 00:18:41.690 "start": 0, 00:18:41.690 "length": 8192 00:18:41.690 }, 00:18:41.690 "queue_depth": 128, 00:18:41.690 "io_size": 4096, 00:18:41.690 "runtime": 1.023915, 00:18:41.690 "iops": 3586.235185537862, 00:18:41.690 "mibps": 14.008731193507273, 00:18:41.690 "io_failed": 0, 00:18:41.690 "io_timeout": 0, 00:18:41.690 "avg_latency_us": 35324.14461066731, 00:18:41.690 "min_latency_us": 9563.401481481482, 00:18:41.690 "max_latency_us": 24660.954074074074 00:18:41.690 } 00:18:41.690 ], 00:18:41.690 "core_count": 1 00:18:41.690 } 00:18:41.690 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:41.690 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.690 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.947 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.947 11:37:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:41.947 "subsystems": [ 00:18:41.947 { 00:18:41.947 "subsystem": "keyring", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "keyring_file_add_key", 00:18:41.947 "params": { 00:18:41.947 "name": "key0", 00:18:41.947 "path": "/tmp/tmp.2HXtexgyle" 00:18:41.947 } 00:18:41.947 } 00:18:41.947 ] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "iobuf", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "iobuf_set_options", 00:18:41.947 "params": { 00:18:41.947 "small_pool_count": 8192, 00:18:41.947 "large_pool_count": 1024, 00:18:41.947 "small_bufsize": 8192, 00:18:41.947 "large_bufsize": 135168, 00:18:41.947 "enable_numa": false 00:18:41.947 } 00:18:41.947 } 00:18:41.947 ] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "sock", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "sock_set_default_impl", 00:18:41.947 "params": { 00:18:41.947 "impl_name": "posix" 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "sock_impl_set_options", 00:18:41.947 "params": { 00:18:41.947 "impl_name": "ssl", 00:18:41.947 "recv_buf_size": 4096, 00:18:41.947 "send_buf_size": 4096, 00:18:41.947 "enable_recv_pipe": true, 00:18:41.947 "enable_quickack": false, 00:18:41.947 "enable_placement_id": 0, 00:18:41.947 "enable_zerocopy_send_server": true, 00:18:41.947 "enable_zerocopy_send_client": false, 00:18:41.947 "zerocopy_threshold": 0, 00:18:41.947 "tls_version": 0, 00:18:41.947 "enable_ktls": false 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "sock_impl_set_options", 00:18:41.947 "params": { 00:18:41.947 "impl_name": "posix", 00:18:41.947 "recv_buf_size": 2097152, 00:18:41.947 "send_buf_size": 2097152, 00:18:41.947 "enable_recv_pipe": true, 00:18:41.947 "enable_quickack": false, 00:18:41.947 "enable_placement_id": 0, 00:18:41.947 "enable_zerocopy_send_server": true, 00:18:41.947 "enable_zerocopy_send_client": false, 00:18:41.947 "zerocopy_threshold": 0, 00:18:41.947 "tls_version": 0, 00:18:41.947 "enable_ktls": false 00:18:41.947 } 00:18:41.947 } 00:18:41.947 ] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "vmd", 00:18:41.947 "config": [] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "accel", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "accel_set_options", 00:18:41.947 "params": { 00:18:41.947 "small_cache_size": 128, 00:18:41.947 "large_cache_size": 16, 00:18:41.947 "task_count": 2048, 00:18:41.947 "sequence_count": 2048, 00:18:41.947 "buf_count": 2048 00:18:41.947 } 00:18:41.947 } 00:18:41.947 ] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "bdev", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "bdev_set_options", 00:18:41.947 "params": { 00:18:41.947 "bdev_io_pool_size": 65535, 00:18:41.947 "bdev_io_cache_size": 256, 00:18:41.947 "bdev_auto_examine": true, 00:18:41.947 "iobuf_small_cache_size": 128, 00:18:41.947 "iobuf_large_cache_size": 16 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "bdev_raid_set_options", 00:18:41.947 "params": { 00:18:41.947 "process_window_size_kb": 1024, 00:18:41.947 "process_max_bandwidth_mb_sec": 0 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "bdev_iscsi_set_options", 00:18:41.947 "params": { 00:18:41.947 "timeout_sec": 30 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "bdev_nvme_set_options", 00:18:41.947 "params": { 00:18:41.947 "action_on_timeout": "none", 00:18:41.947 
"timeout_us": 0, 00:18:41.947 "timeout_admin_us": 0, 00:18:41.947 "keep_alive_timeout_ms": 10000, 00:18:41.947 "arbitration_burst": 0, 00:18:41.947 "low_priority_weight": 0, 00:18:41.947 "medium_priority_weight": 0, 00:18:41.947 "high_priority_weight": 0, 00:18:41.947 "nvme_adminq_poll_period_us": 10000, 00:18:41.947 "nvme_ioq_poll_period_us": 0, 00:18:41.947 "io_queue_requests": 0, 00:18:41.947 "delay_cmd_submit": true, 00:18:41.947 "transport_retry_count": 4, 00:18:41.947 "bdev_retry_count": 3, 00:18:41.947 "transport_ack_timeout": 0, 00:18:41.947 "ctrlr_loss_timeout_sec": 0, 00:18:41.947 "reconnect_delay_sec": 0, 00:18:41.947 "fast_io_fail_timeout_sec": 0, 00:18:41.947 "disable_auto_failback": false, 00:18:41.947 "generate_uuids": false, 00:18:41.947 "transport_tos": 0, 00:18:41.947 "nvme_error_stat": false, 00:18:41.947 "rdma_srq_size": 0, 00:18:41.947 "io_path_stat": false, 00:18:41.947 "allow_accel_sequence": false, 00:18:41.947 "rdma_max_cq_size": 0, 00:18:41.947 "rdma_cm_event_timeout_ms": 0, 00:18:41.947 "dhchap_digests": [ 00:18:41.947 "sha256", 00:18:41.947 "sha384", 00:18:41.947 "sha512" 00:18:41.947 ], 00:18:41.947 "dhchap_dhgroups": [ 00:18:41.947 "null", 00:18:41.947 "ffdhe2048", 00:18:41.947 "ffdhe3072", 00:18:41.947 "ffdhe4096", 00:18:41.947 "ffdhe6144", 00:18:41.947 "ffdhe8192" 00:18:41.947 ] 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "bdev_nvme_set_hotplug", 00:18:41.947 "params": { 00:18:41.947 "period_us": 100000, 00:18:41.947 "enable": false 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "bdev_malloc_create", 00:18:41.947 "params": { 00:18:41.947 "name": "malloc0", 00:18:41.947 "num_blocks": 8192, 00:18:41.947 "block_size": 4096, 00:18:41.947 "physical_block_size": 4096, 00:18:41.947 "uuid": "d86ecbf6-8032-4b10-989d-550f7b9c9856", 00:18:41.947 "optimal_io_boundary": 0, 00:18:41.947 "md_size": 0, 00:18:41.947 "dif_type": 0, 00:18:41.947 "dif_is_head_of_md": false, 00:18:41.947 "dif_pi_format": 0 00:18:41.947 } 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "method": "bdev_wait_for_examine" 00:18:41.947 } 00:18:41.947 ] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "nbd", 00:18:41.947 "config": [] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "scheduler", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "framework_set_scheduler", 00:18:41.947 "params": { 00:18:41.947 "name": "static" 00:18:41.947 } 00:18:41.947 } 00:18:41.947 ] 00:18:41.947 }, 00:18:41.947 { 00:18:41.947 "subsystem": "nvmf", 00:18:41.947 "config": [ 00:18:41.947 { 00:18:41.947 "method": "nvmf_set_config", 00:18:41.947 "params": { 00:18:41.947 "discovery_filter": "match_any", 00:18:41.947 "admin_cmd_passthru": { 00:18:41.947 "identify_ctrlr": false 00:18:41.947 }, 00:18:41.947 "dhchap_digests": [ 00:18:41.947 "sha256", 00:18:41.947 "sha384", 00:18:41.947 "sha512" 00:18:41.948 ], 00:18:41.948 "dhchap_dhgroups": [ 00:18:41.948 "null", 00:18:41.948 "ffdhe2048", 00:18:41.948 "ffdhe3072", 00:18:41.948 "ffdhe4096", 00:18:41.948 "ffdhe6144", 00:18:41.948 "ffdhe8192" 00:18:41.948 ] 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_set_max_subsystems", 00:18:41.948 "params": { 00:18:41.948 "max_subsystems": 1024 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_set_crdt", 00:18:41.948 "params": { 00:18:41.948 "crdt1": 0, 00:18:41.948 "crdt2": 0, 00:18:41.948 "crdt3": 0 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_create_transport", 00:18:41.948 "params": 
{ 00:18:41.948 "trtype": "TCP", 00:18:41.948 "max_queue_depth": 128, 00:18:41.948 "max_io_qpairs_per_ctrlr": 127, 00:18:41.948 "in_capsule_data_size": 4096, 00:18:41.948 "max_io_size": 131072, 00:18:41.948 "io_unit_size": 131072, 00:18:41.948 "max_aq_depth": 128, 00:18:41.948 "num_shared_buffers": 511, 00:18:41.948 "buf_cache_size": 4294967295, 00:18:41.948 "dif_insert_or_strip": false, 00:18:41.948 "zcopy": false, 00:18:41.948 "c2h_success": false, 00:18:41.948 "sock_priority": 0, 00:18:41.948 "abort_timeout_sec": 1, 00:18:41.948 "ack_timeout": 0, 00:18:41.948 "data_wr_pool_size": 0 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_create_subsystem", 00:18:41.948 "params": { 00:18:41.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.948 "allow_any_host": false, 00:18:41.948 "serial_number": "00000000000000000000", 00:18:41.948 "model_number": "SPDK bdev Controller", 00:18:41.948 "max_namespaces": 32, 00:18:41.948 "min_cntlid": 1, 00:18:41.948 "max_cntlid": 65519, 00:18:41.948 "ana_reporting": false 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_subsystem_add_host", 00:18:41.948 "params": { 00:18:41.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.948 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.948 "psk": "key0" 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_subsystem_add_ns", 00:18:41.948 "params": { 00:18:41.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.948 "namespace": { 00:18:41.948 "nsid": 1, 00:18:41.948 "bdev_name": "malloc0", 00:18:41.948 "nguid": "D86ECBF680324B10989D550F7B9C9856", 00:18:41.948 "uuid": "d86ecbf6-8032-4b10-989d-550f7b9c9856", 00:18:41.948 "no_auto_visible": false 00:18:41.948 } 00:18:41.948 } 00:18:41.948 }, 00:18:41.948 { 00:18:41.948 "method": "nvmf_subsystem_add_listener", 00:18:41.948 "params": { 00:18:41.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.948 "listen_address": { 00:18:41.948 "trtype": "TCP", 00:18:41.948 "adrfam": "IPv4", 00:18:41.948 "traddr": "10.0.0.2", 00:18:41.948 "trsvcid": "4420" 00:18:41.948 }, 00:18:41.948 "secure_channel": false, 00:18:41.948 "sock_impl": "ssl" 00:18:41.948 } 00:18:41.948 } 00:18:41.948 ] 00:18:41.948 } 00:18:41.948 ] 00:18:41.948 }' 00:18:41.948 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:42.206 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:42.206 "subsystems": [ 00:18:42.206 { 00:18:42.206 "subsystem": "keyring", 00:18:42.206 "config": [ 00:18:42.206 { 00:18:42.206 "method": "keyring_file_add_key", 00:18:42.206 "params": { 00:18:42.206 "name": "key0", 00:18:42.206 "path": "/tmp/tmp.2HXtexgyle" 00:18:42.206 } 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "subsystem": "iobuf", 00:18:42.207 "config": [ 00:18:42.207 { 00:18:42.207 "method": "iobuf_set_options", 00:18:42.207 "params": { 00:18:42.207 "small_pool_count": 8192, 00:18:42.207 "large_pool_count": 1024, 00:18:42.207 "small_bufsize": 8192, 00:18:42.207 "large_bufsize": 135168, 00:18:42.207 "enable_numa": false 00:18:42.207 } 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "subsystem": "sock", 00:18:42.207 "config": [ 00:18:42.207 { 00:18:42.207 "method": "sock_set_default_impl", 00:18:42.207 "params": { 00:18:42.207 "impl_name": "posix" 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "sock_impl_set_options", 00:18:42.207 
"params": { 00:18:42.207 "impl_name": "ssl", 00:18:42.207 "recv_buf_size": 4096, 00:18:42.207 "send_buf_size": 4096, 00:18:42.207 "enable_recv_pipe": true, 00:18:42.207 "enable_quickack": false, 00:18:42.207 "enable_placement_id": 0, 00:18:42.207 "enable_zerocopy_send_server": true, 00:18:42.207 "enable_zerocopy_send_client": false, 00:18:42.207 "zerocopy_threshold": 0, 00:18:42.207 "tls_version": 0, 00:18:42.207 "enable_ktls": false 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "sock_impl_set_options", 00:18:42.207 "params": { 00:18:42.207 "impl_name": "posix", 00:18:42.207 "recv_buf_size": 2097152, 00:18:42.207 "send_buf_size": 2097152, 00:18:42.207 "enable_recv_pipe": true, 00:18:42.207 "enable_quickack": false, 00:18:42.207 "enable_placement_id": 0, 00:18:42.207 "enable_zerocopy_send_server": true, 00:18:42.207 "enable_zerocopy_send_client": false, 00:18:42.207 "zerocopy_threshold": 0, 00:18:42.207 "tls_version": 0, 00:18:42.207 "enable_ktls": false 00:18:42.207 } 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "subsystem": "vmd", 00:18:42.207 "config": [] 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "subsystem": "accel", 00:18:42.207 "config": [ 00:18:42.207 { 00:18:42.207 "method": "accel_set_options", 00:18:42.207 "params": { 00:18:42.207 "small_cache_size": 128, 00:18:42.207 "large_cache_size": 16, 00:18:42.207 "task_count": 2048, 00:18:42.207 "sequence_count": 2048, 00:18:42.207 "buf_count": 2048 00:18:42.207 } 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "subsystem": "bdev", 00:18:42.207 "config": [ 00:18:42.207 { 00:18:42.207 "method": "bdev_set_options", 00:18:42.207 "params": { 00:18:42.207 "bdev_io_pool_size": 65535, 00:18:42.207 "bdev_io_cache_size": 256, 00:18:42.207 "bdev_auto_examine": true, 00:18:42.207 "iobuf_small_cache_size": 128, 00:18:42.207 "iobuf_large_cache_size": 16 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_raid_set_options", 00:18:42.207 "params": { 00:18:42.207 "process_window_size_kb": 1024, 00:18:42.207 "process_max_bandwidth_mb_sec": 0 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_iscsi_set_options", 00:18:42.207 "params": { 00:18:42.207 "timeout_sec": 30 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_nvme_set_options", 00:18:42.207 "params": { 00:18:42.207 "action_on_timeout": "none", 00:18:42.207 "timeout_us": 0, 00:18:42.207 "timeout_admin_us": 0, 00:18:42.207 "keep_alive_timeout_ms": 10000, 00:18:42.207 "arbitration_burst": 0, 00:18:42.207 "low_priority_weight": 0, 00:18:42.207 "medium_priority_weight": 0, 00:18:42.207 "high_priority_weight": 0, 00:18:42.207 "nvme_adminq_poll_period_us": 10000, 00:18:42.207 "nvme_ioq_poll_period_us": 0, 00:18:42.207 "io_queue_requests": 512, 00:18:42.207 "delay_cmd_submit": true, 00:18:42.207 "transport_retry_count": 4, 00:18:42.207 "bdev_retry_count": 3, 00:18:42.207 "transport_ack_timeout": 0, 00:18:42.207 "ctrlr_loss_timeout_sec": 0, 00:18:42.207 "reconnect_delay_sec": 0, 00:18:42.207 "fast_io_fail_timeout_sec": 0, 00:18:42.207 "disable_auto_failback": false, 00:18:42.207 "generate_uuids": false, 00:18:42.207 "transport_tos": 0, 00:18:42.207 "nvme_error_stat": false, 00:18:42.207 "rdma_srq_size": 0, 00:18:42.207 "io_path_stat": false, 00:18:42.207 "allow_accel_sequence": false, 00:18:42.207 "rdma_max_cq_size": 0, 00:18:42.207 "rdma_cm_event_timeout_ms": 0, 00:18:42.207 "dhchap_digests": [ 00:18:42.207 "sha256", 00:18:42.207 "sha384", 00:18:42.207 
"sha512" 00:18:42.207 ], 00:18:42.207 "dhchap_dhgroups": [ 00:18:42.207 "null", 00:18:42.207 "ffdhe2048", 00:18:42.207 "ffdhe3072", 00:18:42.207 "ffdhe4096", 00:18:42.207 "ffdhe6144", 00:18:42.207 "ffdhe8192" 00:18:42.207 ] 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_nvme_attach_controller", 00:18:42.207 "params": { 00:18:42.207 "name": "nvme0", 00:18:42.207 "trtype": "TCP", 00:18:42.207 "adrfam": "IPv4", 00:18:42.207 "traddr": "10.0.0.2", 00:18:42.207 "trsvcid": "4420", 00:18:42.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.207 "prchk_reftag": false, 00:18:42.207 "prchk_guard": false, 00:18:42.207 "ctrlr_loss_timeout_sec": 0, 00:18:42.207 "reconnect_delay_sec": 0, 00:18:42.207 "fast_io_fail_timeout_sec": 0, 00:18:42.207 "psk": "key0", 00:18:42.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.207 "hdgst": false, 00:18:42.207 "ddgst": false, 00:18:42.207 "multipath": "multipath" 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_nvme_set_hotplug", 00:18:42.207 "params": { 00:18:42.207 "period_us": 100000, 00:18:42.207 "enable": false 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_enable_histogram", 00:18:42.207 "params": { 00:18:42.207 "name": "nvme0n1", 00:18:42.207 "enable": true 00:18:42.207 } 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "method": "bdev_wait_for_examine" 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }, 00:18:42.207 { 00:18:42.207 "subsystem": "nbd", 00:18:42.207 "config": [] 00:18:42.207 } 00:18:42.207 ] 00:18:42.207 }' 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2952532 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2952532 ']' 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2952532 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952532 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.207 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.208 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952532' 00:18:42.208 killing process with pid 2952532 00:18:42.208 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2952532 00:18:42.208 Received shutdown signal, test time was about 1.000000 seconds 00:18:42.208 00:18:42.208 Latency(us) 00:18:42.208 [2024-11-15T10:37:22.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.208 [2024-11-15T10:37:22.635Z] =================================================================================================================== 00:18:42.208 [2024-11-15T10:37:22.635Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.208 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2952532 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2952454 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2952454 
']' 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2952454 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952454 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952454' 00:18:42.465 killing process with pid 2952454 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2952454 00:18:42.465 11:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2952454 00:18:42.723 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:42.723 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.723 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:42.723 "subsystems": [ 00:18:42.723 { 00:18:42.723 "subsystem": "keyring", 00:18:42.723 "config": [ 00:18:42.723 { 00:18:42.723 "method": "keyring_file_add_key", 00:18:42.723 "params": { 00:18:42.723 "name": "key0", 00:18:42.723 "path": "/tmp/tmp.2HXtexgyle" 00:18:42.723 } 00:18:42.723 } 00:18:42.723 ] 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "subsystem": "iobuf", 00:18:42.723 "config": [ 00:18:42.723 { 00:18:42.723 "method": "iobuf_set_options", 00:18:42.723 "params": { 00:18:42.723 "small_pool_count": 8192, 00:18:42.723 "large_pool_count": 1024, 00:18:42.723 "small_bufsize": 8192, 00:18:42.723 "large_bufsize": 135168, 00:18:42.723 "enable_numa": false 00:18:42.723 } 00:18:42.723 } 00:18:42.723 ] 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "subsystem": "sock", 00:18:42.723 "config": [ 00:18:42.723 { 00:18:42.723 "method": "sock_set_default_impl", 00:18:42.723 "params": { 00:18:42.723 "impl_name": "posix" 00:18:42.723 } 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "method": "sock_impl_set_options", 00:18:42.723 "params": { 00:18:42.723 "impl_name": "ssl", 00:18:42.723 "recv_buf_size": 4096, 00:18:42.723 "send_buf_size": 4096, 00:18:42.723 "enable_recv_pipe": true, 00:18:42.723 "enable_quickack": false, 00:18:42.723 "enable_placement_id": 0, 00:18:42.723 "enable_zerocopy_send_server": true, 00:18:42.723 "enable_zerocopy_send_client": false, 00:18:42.723 "zerocopy_threshold": 0, 00:18:42.723 "tls_version": 0, 00:18:42.723 "enable_ktls": false 00:18:42.723 } 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "method": "sock_impl_set_options", 00:18:42.723 "params": { 00:18:42.723 "impl_name": "posix", 00:18:42.723 "recv_buf_size": 2097152, 00:18:42.723 "send_buf_size": 2097152, 00:18:42.723 "enable_recv_pipe": true, 00:18:42.723 "enable_quickack": false, 00:18:42.723 "enable_placement_id": 0, 00:18:42.723 "enable_zerocopy_send_server": true, 00:18:42.723 "enable_zerocopy_send_client": false, 00:18:42.723 "zerocopy_threshold": 0, 00:18:42.723 "tls_version": 0, 00:18:42.723 "enable_ktls": false 00:18:42.723 } 00:18:42.723 } 00:18:42.723 ] 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "subsystem": 
"vmd", 00:18:42.723 "config": [] 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "subsystem": "accel", 00:18:42.723 "config": [ 00:18:42.723 { 00:18:42.723 "method": "accel_set_options", 00:18:42.723 "params": { 00:18:42.723 "small_cache_size": 128, 00:18:42.723 "large_cache_size": 16, 00:18:42.723 "task_count": 2048, 00:18:42.723 "sequence_count": 2048, 00:18:42.723 "buf_count": 2048 00:18:42.723 } 00:18:42.723 } 00:18:42.723 ] 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "subsystem": "bdev", 00:18:42.723 "config": [ 00:18:42.723 { 00:18:42.723 "method": "bdev_set_options", 00:18:42.723 "params": { 00:18:42.723 "bdev_io_pool_size": 65535, 00:18:42.723 "bdev_io_cache_size": 256, 00:18:42.723 "bdev_auto_examine": true, 00:18:42.723 "iobuf_small_cache_size": 128, 00:18:42.723 "iobuf_large_cache_size": 16 00:18:42.723 } 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "method": "bdev_raid_set_options", 00:18:42.723 "params": { 00:18:42.723 "process_window_size_kb": 1024, 00:18:42.723 "process_max_bandwidth_mb_sec": 0 00:18:42.723 } 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "method": "bdev_iscsi_set_options", 00:18:42.723 "params": { 00:18:42.723 "timeout_sec": 30 00:18:42.723 } 00:18:42.723 }, 00:18:42.723 { 00:18:42.723 "method": "bdev_nvme_set_options", 00:18:42.723 "params": { 00:18:42.723 "action_on_timeout": "none", 00:18:42.723 "timeout_us": 0, 00:18:42.723 "timeout_admin_us": 0, 00:18:42.723 "keep_alive_timeout_ms": 10000, 00:18:42.723 "arbitration_burst": 0, 00:18:42.723 "low_priority_weight": 0, 00:18:42.723 "medium_priority_weight": 0, 00:18:42.723 "high_priority_weight": 0, 00:18:42.723 "nvme_adminq_poll_period_us": 10000, 00:18:42.723 "nvme_ioq_poll_period_us": 0, 00:18:42.723 "io_queue_requests": 0, 00:18:42.723 "delay_cmd_submit": true, 00:18:42.723 "transport_retry_count": 4, 00:18:42.723 "bdev_retry_count": 3, 00:18:42.723 "transport_ack_timeout": 0, 00:18:42.723 "ctrlr_loss_timeout_sec": 0, 00:18:42.723 "reconnect_delay_sec": 0, 00:18:42.723 "fast_io_fail_timeout_sec": 0, 00:18:42.723 "disable_auto_failback": false, 00:18:42.723 "generate_uuids": false, 00:18:42.723 "transport_tos": 0, 00:18:42.723 "nvme_error_stat": false, 00:18:42.723 "rdma_srq_size": 0, 00:18:42.723 "io_path_stat": false, 00:18:42.723 "allow_accel_sequence": false, 00:18:42.723 "rdma_max_cq_size": 0, 00:18:42.723 "rdma_cm_event_timeout_ms": 0, 00:18:42.724 "dhchap_digests": [ 00:18:42.724 "sha256", 00:18:42.724 "sha384", 00:18:42.724 "sha512" 00:18:42.724 ], 00:18:42.724 "dhchap_dhgroups": [ 00:18:42.724 "null", 00:18:42.724 "ffdhe2048", 00:18:42.724 "ffdhe3072", 00:18:42.724 "ffdhe4096", 00:18:42.724 "ffdhe6144", 00:18:42.724 "ffdhe8192" 00:18:42.724 ] 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "bdev_nvme_set_hotplug", 00:18:42.724 "params": { 00:18:42.724 "period_us": 100000, 00:18:42.724 "enable": false 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "bdev_malloc_create", 00:18:42.724 "params": { 00:18:42.724 "name": "malloc0", 00:18:42.724 "num_blocks": 8192, 00:18:42.724 "block_size": 4096, 00:18:42.724 "physical_block_size": 4096, 00:18:42.724 "uuid": "d86ecbf6-8032-4b10-989d-550f7b9c9856", 00:18:42.724 "optimal_io_boundary": 0, 00:18:42.724 "md_size": 0, 00:18:42.724 "dif_type": 0, 00:18:42.724 "dif_is_head_of_md": false, 00:18:42.724 "dif_pi_format": 0 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "bdev_wait_for_examine" 00:18:42.724 } 00:18:42.724 ] 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "subsystem": "nbd", 00:18:42.724 "config": 
[] 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "subsystem": "scheduler", 00:18:42.724 "config": [ 00:18:42.724 { 00:18:42.724 "method": "framework_set_scheduler", 00:18:42.724 "params": { 00:18:42.724 "name": "static" 00:18:42.724 } 00:18:42.724 } 00:18:42.724 ] 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "subsystem": "nvmf", 00:18:42.724 "config": [ 00:18:42.724 { 00:18:42.724 "method": "nvmf_set_config", 00:18:42.724 "params": { 00:18:42.724 "discovery_filter": "match_any", 00:18:42.724 "admin_cmd_passthru": { 00:18:42.724 "identify_ctrlr": false 00:18:42.724 }, 00:18:42.724 "dhchap_digests": [ 00:18:42.724 "sha256", 00:18:42.724 "sha384", 00:18:42.724 "sha512" 00:18:42.724 ], 00:18:42.724 "dhchap_dhgroups": [ 00:18:42.724 "null", 00:18:42.724 "ffdhe2048", 00:18:42.724 "ffdhe3072", 00:18:42.724 "ffdhe4096", 00:18:42.724 "ffdhe6144", 00:18:42.724 "ffdhe8192" 00:18:42.724 ] 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_set_max_subsystems", 00:18:42.724 "params": { 00:18:42.724 "max_subsystems": 1024 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_set_crdt", 00:18:42.724 "params": { 00:18:42.724 "crdt1": 0, 00:18:42.724 "crdt2": 0, 00:18:42.724 "crdt3": 0 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_create_transport", 00:18:42.724 "params": { 00:18:42.724 "trtype": "TCP", 00:18:42.724 "max_queue_depth": 128, 00:18:42.724 "max_io_qpairs_per_ctrlr": 127, 00:18:42.724 "in_capsule_data_size": 4096, 00:18:42.724 "max_io_size": 131072, 00:18:42.724 "io_unit_size": 131072, 00:18:42.724 "max_aq_depth": 128, 00:18:42.724 "num_shared_buffers": 511, 00:18:42.724 "buf_cache_size": 4294967295, 00:18:42.724 "dif_insert_or_strip": false, 00:18:42.724 "zcopy": false, 00:18:42.724 "c2h_success": false, 00:18:42.724 "sock_priority": 0, 00:18:42.724 "abort_timeout_sec": 1, 00:18:42.724 "ack_timeout": 0, 00:18:42.724 "data_wr_pool_size": 0 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_create_subsystem", 00:18:42.724 "params": { 00:18:42.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.724 "allow_any_host": false, 00:18:42.724 "serial_number": "00000000000000000000", 00:18:42.724 "model_number": "SPDK bdev Controller", 00:18:42.724 "max_namespaces": 32, 00:18:42.724 "min_cntlid": 1, 00:18:42.724 "max_cntlid": 65519, 00:18:42.724 "ana_reporting": false 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_subsystem_add_host", 00:18:42.724 "params": { 00:18:42.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.724 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.724 "psk": "key0" 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_subsystem_add_ns", 00:18:42.724 "params": { 00:18:42.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.724 "namespace": { 00:18:42.724 "nsid": 1, 00:18:42.724 "bdev_name": "malloc0", 00:18:42.724 "nguid": "D86ECBF680324B10989D550F7B9C9856", 00:18:42.724 "uuid": "d86ecbf6-8032-4b10-989d-550f7b9c9856", 00:18:42.724 "no_auto_visible": false 00:18:42.724 } 00:18:42.724 } 00:18:42.724 }, 00:18:42.724 { 00:18:42.724 "method": "nvmf_subsystem_add_listener", 00:18:42.724 "params": { 00:18:42.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.724 "listen_address": { 00:18:42.724 "trtype": "TCP", 00:18:42.724 "adrfam": "IPv4", 00:18:42.724 "traddr": "10.0.0.2", 00:18:42.724 "trsvcid": "4420" 00:18:42.724 }, 00:18:42.724 "secure_channel": false, 00:18:42.724 "sock_impl": "ssl" 00:18:42.724 } 00:18:42.724 } 00:18:42.724 ] 00:18:42.724 } 
00:18:42.724 ] 00:18:42.724 }' 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2952938 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2952938 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2952938 ']' 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.724 11:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.724 [2024-11-15 11:37:23.081378] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:42.724 [2024-11-15 11:37:23.081480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.983 [2024-11-15 11:37:23.151345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.983 [2024-11-15 11:37:23.203120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.983 [2024-11-15 11:37:23.203178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.983 [2024-11-15 11:37:23.203205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.983 [2024-11-15 11:37:23.203216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.983 [2024-11-15 11:37:23.203225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
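The nvmf_tgt instance above is launched inside the cvl_0_0_ns_spdk namespace with its whole JSON configuration delivered over /dev/fd/62, i.e. the block echoed by the test script is piped straight into the target rather than written to a file. A minimal sketch of that pattern, with the config body abbreviated and paths assumed relative to an SPDK checkout (illustrative only, not copied from this run):

    # hypothetical sketch: start nvmf_tgt with an inline JSON config
    CONF='{ "subsystems": [ ... ] }'          # placeholder for the config echoed above
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$CONF") &
    # crude stand-in for the harness's waitforlisten: block until the RPC socket exists
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 1; done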
00:18:42.983 [2024-11-15 11:37:23.203860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.241 [2024-11-15 11:37:23.442794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.241 [2024-11-15 11:37:23.474825] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.241 [2024-11-15 11:37:23.475064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2953091 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2953091 /var/tmp/bdevperf.sock 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2953091 ']' 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
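bdevperf is started the same way: idle with -z, an RPC socket at /var/tmp/bdevperf.sock, and its JSON config (the block echoed below) fed through /dev/fd/63; once the socket is up, the test drives it over RPC. A rough sketch of that launch-and-drive sequence, assuming the rpc.py and bdevperf.py helpers shipped in the SPDK tree (condensed for illustration, not copied from this run):

    # hypothetical condensed sketch of the bdevperf phase of this test
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$BDEVPERF_CONF") &   # $BDEVPERF_CONF = JSON echoed below
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 1; done          # wait for the RPC socket
    # confirm the TLS-protected controller attached, then run the verify workload
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests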
00:18:43.808 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:43.808 "subsystems": [ 00:18:43.808 { 00:18:43.808 "subsystem": "keyring", 00:18:43.808 "config": [ 00:18:43.808 { 00:18:43.808 "method": "keyring_file_add_key", 00:18:43.808 "params": { 00:18:43.808 "name": "key0", 00:18:43.808 "path": "/tmp/tmp.2HXtexgyle" 00:18:43.808 } 00:18:43.808 } 00:18:43.808 ] 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "subsystem": "iobuf", 00:18:43.808 "config": [ 00:18:43.808 { 00:18:43.808 "method": "iobuf_set_options", 00:18:43.808 "params": { 00:18:43.808 "small_pool_count": 8192, 00:18:43.808 "large_pool_count": 1024, 00:18:43.808 "small_bufsize": 8192, 00:18:43.808 "large_bufsize": 135168, 00:18:43.808 "enable_numa": false 00:18:43.808 } 00:18:43.808 } 00:18:43.808 ] 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "subsystem": "sock", 00:18:43.808 "config": [ 00:18:43.808 { 00:18:43.808 "method": "sock_set_default_impl", 00:18:43.808 "params": { 00:18:43.808 "impl_name": "posix" 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "sock_impl_set_options", 00:18:43.808 "params": { 00:18:43.808 "impl_name": "ssl", 00:18:43.808 "recv_buf_size": 4096, 00:18:43.808 "send_buf_size": 4096, 00:18:43.808 "enable_recv_pipe": true, 00:18:43.808 "enable_quickack": false, 00:18:43.808 "enable_placement_id": 0, 00:18:43.808 "enable_zerocopy_send_server": true, 00:18:43.808 "enable_zerocopy_send_client": false, 00:18:43.808 "zerocopy_threshold": 0, 00:18:43.808 "tls_version": 0, 00:18:43.808 "enable_ktls": false 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "sock_impl_set_options", 00:18:43.808 "params": { 00:18:43.808 "impl_name": "posix", 00:18:43.808 "recv_buf_size": 2097152, 00:18:43.808 "send_buf_size": 2097152, 00:18:43.808 "enable_recv_pipe": true, 00:18:43.808 "enable_quickack": false, 00:18:43.808 "enable_placement_id": 0, 00:18:43.808 "enable_zerocopy_send_server": true, 00:18:43.808 "enable_zerocopy_send_client": false, 00:18:43.808 "zerocopy_threshold": 0, 00:18:43.808 "tls_version": 0, 00:18:43.808 "enable_ktls": false 00:18:43.808 } 00:18:43.808 } 00:18:43.808 ] 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "subsystem": "vmd", 00:18:43.808 "config": [] 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "subsystem": "accel", 00:18:43.808 "config": [ 00:18:43.808 { 00:18:43.808 "method": "accel_set_options", 00:18:43.808 "params": { 00:18:43.808 "small_cache_size": 128, 00:18:43.808 "large_cache_size": 16, 00:18:43.808 "task_count": 2048, 00:18:43.808 "sequence_count": 2048, 00:18:43.808 "buf_count": 2048 00:18:43.808 } 00:18:43.808 } 00:18:43.808 ] 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "subsystem": "bdev", 00:18:43.808 "config": [ 00:18:43.808 { 00:18:43.808 "method": "bdev_set_options", 00:18:43.808 "params": { 00:18:43.808 "bdev_io_pool_size": 65535, 00:18:43.808 "bdev_io_cache_size": 256, 00:18:43.808 "bdev_auto_examine": true, 00:18:43.808 "iobuf_small_cache_size": 128, 00:18:43.808 "iobuf_large_cache_size": 16 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_raid_set_options", 00:18:43.808 "params": { 00:18:43.808 "process_window_size_kb": 1024, 00:18:43.808 "process_max_bandwidth_mb_sec": 0 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_iscsi_set_options", 00:18:43.808 "params": { 00:18:43.808 "timeout_sec": 30 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_nvme_set_options", 00:18:43.808 "params": { 00:18:43.808 "action_on_timeout": "none", 
00:18:43.808 "timeout_us": 0, 00:18:43.808 "timeout_admin_us": 0, 00:18:43.808 "keep_alive_timeout_ms": 10000, 00:18:43.808 "arbitration_burst": 0, 00:18:43.808 "low_priority_weight": 0, 00:18:43.808 "medium_priority_weight": 0, 00:18:43.808 "high_priority_weight": 0, 00:18:43.808 "nvme_adminq_poll_period_us": 10000, 00:18:43.808 "nvme_ioq_poll_period_us": 0, 00:18:43.808 "io_queue_requests": 512, 00:18:43.808 "delay_cmd_submit": true, 00:18:43.808 "transport_retry_count": 4, 00:18:43.808 "bdev_retry_count": 3, 00:18:43.808 "transport_ack_timeout": 0, 00:18:43.808 "ctrlr_loss_timeout_sec": 0, 00:18:43.808 "reconnect_delay_sec": 0, 00:18:43.808 "fast_io_fail_timeout_sec": 0, 00:18:43.808 "disable_auto_failback": false, 00:18:43.808 "generate_uuids": false, 00:18:43.808 "transport_tos": 0, 00:18:43.808 "nvme_error_stat": false, 00:18:43.808 "rdma_srq_size": 0, 00:18:43.808 "io_path_stat": false, 00:18:43.808 "allow_accel_sequence": false, 00:18:43.808 "rdma_max_cq_size": 0, 00:18:43.808 "rdma_cm_event_timeout_ms": 0, 00:18:43.808 "dhchap_digests": [ 00:18:43.808 "sha256", 00:18:43.808 "sha384", 00:18:43.808 "sha512" 00:18:43.808 ], 00:18:43.808 "dhchap_dhgroups": [ 00:18:43.808 "null", 00:18:43.808 "ffdhe2048", 00:18:43.808 "ffdhe3072", 00:18:43.808 "ffdhe4096", 00:18:43.808 "ffdhe6144", 00:18:43.808 "ffdhe8192" 00:18:43.808 ] 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_nvme_attach_controller", 00:18:43.808 "params": { 00:18:43.808 "name": "nvme0", 00:18:43.808 "trtype": "TCP", 00:18:43.808 "adrfam": "IPv4", 00:18:43.808 "traddr": "10.0.0.2", 00:18:43.808 "trsvcid": "4420", 00:18:43.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.808 "prchk_reftag": false, 00:18:43.808 "prchk_guard": false, 00:18:43.808 "ctrlr_loss_timeout_sec": 0, 00:18:43.808 "reconnect_delay_sec": 0, 00:18:43.808 "fast_io_fail_timeout_sec": 0, 00:18:43.808 "psk": "key0", 00:18:43.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.808 "hdgst": false, 00:18:43.808 "ddgst": false, 00:18:43.808 "multipath": "multipath" 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_nvme_set_hotplug", 00:18:43.808 "params": { 00:18:43.808 "period_us": 100000, 00:18:43.808 "enable": false 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_enable_histogram", 00:18:43.808 "params": { 00:18:43.808 "name": "nvme0n1", 00:18:43.808 "enable": true 00:18:43.808 } 00:18:43.808 }, 00:18:43.808 { 00:18:43.808 "method": "bdev_wait_for_examine" 00:18:43.809 } 00:18:43.809 ] 00:18:43.809 }, 00:18:43.809 { 00:18:43.809 "subsystem": "nbd", 00:18:43.809 "config": [] 00:18:43.809 } 00:18:43.809 ] 00:18:43.809 }' 00:18:43.809 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.809 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.809 [2024-11-15 11:37:24.168983] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:18:43.809 [2024-11-15 11:37:24.169065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953091 ] 00:18:44.066 [2024-11-15 11:37:24.234412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.066 [2024-11-15 11:37:24.292027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.066 [2024-11-15 11:37:24.462442] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.323 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.323 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.323 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:44.323 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:44.580 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.580 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.580 Running I/O for 1 seconds... 00:18:45.952 3478.00 IOPS, 13.59 MiB/s 00:18:45.952 Latency(us) 00:18:45.952 [2024-11-15T10:37:26.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.952 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:45.952 Verification LBA range: start 0x0 length 0x2000 00:18:45.952 nvme0n1 : 1.02 3531.84 13.80 0.00 0.00 35880.44 7961.41 43496.49 00:18:45.952 [2024-11-15T10:37:26.379Z] =================================================================================================================== 00:18:45.952 [2024-11-15T10:37:26.379Z] Total : 3531.84 13.80 0.00 0.00 35880.44 7961.41 43496.49 00:18:45.952 { 00:18:45.952 "results": [ 00:18:45.952 { 00:18:45.952 "job": "nvme0n1", 00:18:45.952 "core_mask": "0x2", 00:18:45.952 "workload": "verify", 00:18:45.952 "status": "finished", 00:18:45.952 "verify_range": { 00:18:45.952 "start": 0, 00:18:45.952 "length": 8192 00:18:45.952 }, 00:18:45.952 "queue_depth": 128, 00:18:45.952 "io_size": 4096, 00:18:45.952 "runtime": 1.020998, 00:18:45.952 "iops": 3531.8384560988366, 00:18:45.952 "mibps": 13.79624396913608, 00:18:45.952 "io_failed": 0, 00:18:45.952 "io_timeout": 0, 00:18:45.952 "avg_latency_us": 35880.438334463135, 00:18:45.952 "min_latency_us": 7961.41037037037, 00:18:45.952 "max_latency_us": 43496.485925925925 00:18:45.952 } 00:18:45.952 ], 00:18:45.952 "core_count": 1 00:18:45.952 } 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:45.952 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:45.952 nvmf_trace.0 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2953091 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2953091 ']' 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2953091 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2953091 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2953091' 00:18:45.953 killing process with pid 2953091 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2953091 00:18:45.953 Received shutdown signal, test time was about 1.000000 seconds 00:18:45.953 00:18:45.953 Latency(us) 00:18:45.953 [2024-11-15T10:37:26.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.953 [2024-11-15T10:37:26.380Z] =================================================================================================================== 00:18:45.953 [2024-11-15T10:37:26.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2953091 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:45.953 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:45.953 rmmod nvme_tcp 00:18:45.953 rmmod nvme_fabrics 00:18:45.953 rmmod nvme_keyring 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.210 11:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2952938 ']' 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2952938 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2952938 ']' 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2952938 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2952938 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2952938' 00:18:46.210 killing process with pid 2952938 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2952938 00:18:46.210 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2952938 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.470 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kQWJGLj1az /tmp/tmp.kkV2MsUGGx /tmp/tmp.2HXtexgyle 00:18:48.377 00:18:48.377 real 1m22.692s 00:18:48.377 user 2m18.701s 00:18:48.377 sys 0m24.854s 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.377 ************************************ 00:18:48.377 END TEST nvmf_tls 
00:18:48.377 ************************************ 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.377 ************************************ 00:18:48.377 START TEST nvmf_fips 00:18:48.377 ************************************ 00:18:48.377 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:48.636 * Looking for test storage... 00:18:48.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.636 --rc genhtml_branch_coverage=1 00:18:48.636 --rc genhtml_function_coverage=1 00:18:48.636 --rc genhtml_legend=1 00:18:48.636 --rc geninfo_all_blocks=1 00:18:48.636 --rc geninfo_unexecuted_blocks=1 00:18:48.636 00:18:48.636 ' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.636 --rc genhtml_branch_coverage=1 00:18:48.636 --rc genhtml_function_coverage=1 00:18:48.636 --rc genhtml_legend=1 00:18:48.636 --rc geninfo_all_blocks=1 00:18:48.636 --rc geninfo_unexecuted_blocks=1 00:18:48.636 00:18:48.636 ' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.636 --rc genhtml_branch_coverage=1 00:18:48.636 --rc genhtml_function_coverage=1 00:18:48.636 --rc genhtml_legend=1 00:18:48.636 --rc geninfo_all_blocks=1 00:18:48.636 --rc geninfo_unexecuted_blocks=1 00:18:48.636 00:18:48.636 ' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.636 --rc genhtml_branch_coverage=1 00:18:48.636 --rc genhtml_function_coverage=1 00:18:48.636 --rc genhtml_legend=1 00:18:48.636 --rc geninfo_all_blocks=1 00:18:48.636 --rc geninfo_unexecuted_blocks=1 00:18:48.636 00:18:48.636 ' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.636 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:48.637 11:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:48.637 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:48.637 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:48.897 Error setting digest 00:18:48.897 4092AB45C87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:48.897 4092AB45C87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.897 
11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:48.897 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.797 11:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:50.797 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:50.797 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.797 11:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:50.797 Found net devices under 0000:09:00.0: cvl_0_0 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.797 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:50.798 Found net devices under 0000:09:00.1: cvl_0_1 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:50.798 11:37:31 
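The pass above is gather_supported_nvmf_pci_devs matching the two ports of an Intel E810 (vendor 0x8086, device 0x159b) and then resolving each PCI function to its kernel netdev through /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of the same lookup, written against plain sysfs rather than the pci_bus_cache arrays the helper uses, so an illustration of the idea and not the helper itself:

    #!/usr/bin/env bash
    # Walk the PCI bus, keep E810 functions (0x8086:0x159b), print their netdevs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")      # e.g. 0x8086
        device=$(<"$pci/device")      # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"   # cvl_0_0 / cvl_0_1 in this run
        done
    done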
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.798 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:51.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:18:51.056 00:18:51.056 --- 10.0.0.2 ping statistics --- 00:18:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.056 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:18:51.056 00:18:51.056 --- 10.0.0.1 ping statistics --- 00:18:51.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.056 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2955326 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2955326 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2955326 ']' 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.056 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.056 [2024-11-15 11:37:31.344195] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
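The nvmf_tcp_init block above is what turns the two detected ports into a self-contained target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves the path before any SPDK process starts. Condensed into a standalone sketch, with interface names and addresses exactly as they appear in this run:

    # Target/initiator split performed by nvmf_tcp_init (names from this log).
    set -e
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP (port 4420) in through the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target namespace -> initiator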
00:18:51.056 [2024-11-15 11:37:31.344277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.056 [2024-11-15 11:37:31.415243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.056 [2024-11-15 11:37:31.469602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.056 [2024-11-15 11:37:31.469659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.056 [2024-11-15 11:37:31.469685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.056 [2024-11-15 11:37:31.469697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.056 [2024-11-15 11:37:31.469707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.056 [2024-11-15 11:37:31.470252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.jUJ 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.jUJ 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.jUJ 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.jUJ 00:18:51.314 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.572 [2024-11-15 11:37:31.908036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.572 [2024-11-15 11:37:31.924059] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.572 [2024-11-15 11:37:31.924324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.572 malloc0 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.572 11:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2955361 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2955361 /var/tmp/bdevperf.sock 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2955361 ']' 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.572 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.830 [2024-11-15 11:37:32.063016] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:18:51.830 [2024-11-15 11:37:32.063108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955361 ] 00:18:51.830 [2024-11-15 11:37:32.131798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.830 [2024-11-15 11:37:32.188916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.087 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.087 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:52.087 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.jUJ 00:18:52.346 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.603 [2024-11-15 11:37:32.819644] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.603 TLSTESTn1 00:18:52.603 11:37:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.603 Running I/O for 10 seconds... 
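At this point the FIPS test has both sides of a TLS-protected NVMe/TCP connection in place: the PSK (NVMeTLSkey-1:01:...) was written to a mode-0600 temp file (/tmp/spdk-psk.jUJ), setup_nvmf_tgt_conf configured the target so that nqn.2016-06.io.spdk:cnode1 listens with TLS on 10.0.0.2:4420 (its individual RPCs are not shown in this excerpt), and a separate bdevperf process acts as the initiator. The initiator-side commands, as issued here, distilled into one sketch:

    # Initiator side of the TLS run (paths, NQNs and flags as used in this log).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    KEY=/tmp/spdk-psk.jUJ                          # PSK file, chmod 0600

    # bdevperf runs as its own SPDK app on a private RPC socket and sits idle
    # until it is driven over RPC.
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # Register the PSK as key0 in bdevperf's keyring, then attach over TLS;
    # the resulting bdev shows up as TLSTESTn1.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Start the 10-second verify workload defined on the bdevperf command line.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

In the harness the background start is followed by a waitforlisten on /var/tmp/bdevperf.sock before any RPC is sent; the sketch omits that wait for brevity.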
00:18:54.905 3477.00 IOPS, 13.58 MiB/s [2024-11-15T10:37:36.264Z] 3499.50 IOPS, 13.67 MiB/s [2024-11-15T10:37:37.196Z] 3477.67 IOPS, 13.58 MiB/s [2024-11-15T10:37:38.127Z] 3498.50 IOPS, 13.67 MiB/s [2024-11-15T10:37:39.062Z] 3486.40 IOPS, 13.62 MiB/s [2024-11-15T10:37:40.064Z] 3490.50 IOPS, 13.63 MiB/s [2024-11-15T10:37:41.041Z] 3496.14 IOPS, 13.66 MiB/s [2024-11-15T10:37:42.411Z] 3491.62 IOPS, 13.64 MiB/s [2024-11-15T10:37:43.344Z] 3498.56 IOPS, 13.67 MiB/s [2024-11-15T10:37:43.344Z] 3503.50 IOPS, 13.69 MiB/s 00:19:02.917 Latency(us) 00:19:02.917 [2024-11-15T10:37:43.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.917 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.917 Verification LBA range: start 0x0 length 0x2000 00:19:02.917 TLSTESTn1 : 10.02 3509.38 13.71 0.00 0.00 36414.04 6747.78 30292.20 00:19:02.917 [2024-11-15T10:37:43.344Z] =================================================================================================================== 00:19:02.917 [2024-11-15T10:37:43.344Z] Total : 3509.38 13.71 0.00 0.00 36414.04 6747.78 30292.20 00:19:02.917 { 00:19:02.917 "results": [ 00:19:02.917 { 00:19:02.917 "job": "TLSTESTn1", 00:19:02.917 "core_mask": "0x4", 00:19:02.917 "workload": "verify", 00:19:02.917 "status": "finished", 00:19:02.917 "verify_range": { 00:19:02.917 "start": 0, 00:19:02.917 "length": 8192 00:19:02.917 }, 00:19:02.917 "queue_depth": 128, 00:19:02.917 "io_size": 4096, 00:19:02.917 "runtime": 10.019159, 00:19:02.917 "iops": 3509.3763857824792, 00:19:02.917 "mibps": 13.70850150696281, 00:19:02.917 "io_failed": 0, 00:19:02.917 "io_timeout": 0, 00:19:02.917 "avg_latency_us": 36414.0408003396, 00:19:02.917 "min_latency_us": 6747.780740740741, 00:19:02.917 "max_latency_us": 30292.195555555554 00:19:02.917 } 00:19:02.917 ], 00:19:02.917 "core_count": 1 00:19:02.917 } 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:02.917 nvmf_trace.0 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2955361 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2955361 ']' 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2955361 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2955361 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2955361' 00:19:02.917 killing process with pid 2955361 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2955361 00:19:02.917 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.917 00:19:02.917 Latency(us) 00:19:02.917 [2024-11-15T10:37:43.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.917 [2024-11-15T10:37:43.344Z] =================================================================================================================== 00:19:02.917 [2024-11-15T10:37:43.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.917 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2955361 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.174 rmmod nvme_tcp 00:19:03.174 rmmod nvme_fabrics 00:19:03.174 rmmod nvme_keyring 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2955326 ']' 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2955326 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2955326 ']' 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2955326 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2955326 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.174 11:37:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2955326' 00:19:03.174 killing process with pid 2955326 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2955326 00:19:03.174 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2955326 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.433 11:37:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.jUJ 00:19:05.961 00:19:05.961 real 0m16.994s 00:19:05.961 user 0m22.543s 00:19:05.961 sys 0m5.398s 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:05.961 ************************************ 00:19:05.961 END TEST nvmf_fips 00:19:05.961 ************************************ 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:05.961 ************************************ 00:19:05.961 START TEST nvmf_control_msg_list 00:19:05.961 ************************************ 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:05.961 * Looking for test storage... 
00:19:05.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:05.961 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:05.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.962 --rc genhtml_branch_coverage=1 00:19:05.962 --rc genhtml_function_coverage=1 00:19:05.962 --rc genhtml_legend=1 00:19:05.962 --rc geninfo_all_blocks=1 00:19:05.962 --rc geninfo_unexecuted_blocks=1 00:19:05.962 00:19:05.962 ' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:05.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.962 --rc genhtml_branch_coverage=1 00:19:05.962 --rc genhtml_function_coverage=1 00:19:05.962 --rc genhtml_legend=1 00:19:05.962 --rc geninfo_all_blocks=1 00:19:05.962 --rc geninfo_unexecuted_blocks=1 00:19:05.962 00:19:05.962 ' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:05.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.962 --rc genhtml_branch_coverage=1 00:19:05.962 --rc genhtml_function_coverage=1 00:19:05.962 --rc genhtml_legend=1 00:19:05.962 --rc geninfo_all_blocks=1 00:19:05.962 --rc geninfo_unexecuted_blocks=1 00:19:05.962 00:19:05.962 ' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:05.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:05.962 --rc genhtml_branch_coverage=1 00:19:05.962 --rc genhtml_function_coverage=1 00:19:05.962 --rc genhtml_legend=1 00:19:05.962 --rc geninfo_all_blocks=1 00:19:05.962 --rc geninfo_unexecuted_blocks=1 00:19:05.962 00:19:05.962 ' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:05.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:05.962 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:05.963 11:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:07.865 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:07.866 11:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:07.866 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.866 11:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:07.866 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:07.866 Found net devices under 0000:09:00.0: cvl_0_0 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:07.866 Found net devices under 0000:09:00.1: cvl_0_1 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.866 11:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:07.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:19:07.866 00:19:07.866 --- 10.0.0.2 ping statistics --- 00:19:07.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.866 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:19:07.866 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:07.867 00:19:07.867 --- 10.0.0.1 ping statistics --- 00:19:07.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.867 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2958743 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2958743 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2958743 ']' 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.867 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.125 [2024-11-15 11:37:48.308522] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:19:08.125 [2024-11-15 11:37:48.308615] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.125 [2024-11-15 11:37:48.380924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.125 [2024-11-15 11:37:48.433536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.125 [2024-11-15 11:37:48.433595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.125 [2024-11-15 11:37:48.433622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.126 [2024-11-15 11:37:48.433633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.126 [2024-11-15 11:37:48.433642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
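nvmfappstart above launches nvmf_tgt inside the target namespace and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait pattern; the polling loop is illustrative only (the real waitforlisten helper in autotest_common.sh is more elaborate), and spdk_get_version is simply used here as a cheap readiness probe:

    # Start the target in the namespace, then wait for its RPC socket (sketch).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        # Ready once any RPC succeeds on the default socket.
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"    # make sure the target did not die while we waited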
00:19:08.126 [2024-11-15 11:37:48.434221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.126 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.126 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:08.126 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.126 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.126 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.383 [2024-11-15 11:37:48.572691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.383 Malloc0 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.383 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.384 11:37:48 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:08.384 [2024-11-15 11:37:48.611474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2958765 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2958766 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2958767 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2958765 00:19:08.384 11:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:08.384 [2024-11-15 11:37:48.670012] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:08.384 [2024-11-15 11:37:48.679967] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:08.384 [2024-11-15 11:37:48.680175] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:09.756 Initializing NVMe Controllers 00:19:09.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:09.756 Initialization complete. Launching workers. 
00:19:09.756 ======================================================== 00:19:09.756 Latency(us) 00:19:09.756 Device Information : IOPS MiB/s Average min max 00:19:09.756 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4475.00 17.48 223.00 152.75 398.20 00:19:09.756 ======================================================== 00:19:09.756 Total : 4475.00 17.48 223.00 152.75 398.20 00:19:09.756 00:19:09.756 Initializing NVMe Controllers 00:19:09.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:09.756 Initialization complete. Launching workers. 00:19:09.756 ======================================================== 00:19:09.756 Latency(us) 00:19:09.756 Device Information : IOPS MiB/s Average min max 00:19:09.756 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4393.00 17.16 227.26 161.08 399.33 00:19:09.756 ======================================================== 00:19:09.756 Total : 4393.00 17.16 227.26 161.08 399.33 00:19:09.756 00:19:09.756 Initializing NVMe Controllers 00:19:09.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.756 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:09.756 Initialization complete. Launching workers. 00:19:09.756 ======================================================== 00:19:09.756 Latency(us) 00:19:09.756 Device Information : IOPS MiB/s Average min max 00:19:09.756 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41131.60 40777.15 41983.07 00:19:09.756 ======================================================== 00:19:09.756 Total : 25.00 0.10 41131.60 40777.15 41983.07 00:19:09.756 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2958766 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2958767 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.757 rmmod nvme_tcp 00:19:09.757 rmmod nvme_fabrics 00:19:09.757 rmmod nvme_keyring 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 2958743 ']' 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2958743 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2958743 ']' 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2958743 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958743 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958743' 00:19:09.757 killing process with pid 2958743 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2958743 00:19:09.757 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2958743 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.757 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.290 00:19:12.290 real 0m6.310s 00:19:12.290 user 0m5.454s 00:19:12.290 sys 0m2.615s 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.290 ************************************ 00:19:12.290 END TEST nvmf_control_msg_list 00:19:12.290 ************************************ 
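For reference, the target configuration that the control_msg_list run above exercises can be read out of the trace in one place. The sketch below simply collects the same rpc_cmd and spdk_nvme_perf invocations that appear in the log (values, NQN and addresses exactly as logged; the absolute /var/jenkins/... binary paths are shortened, and rpc_cmd is the test helper that forwards to scripts/rpc.py against the nvmf_tgt started earlier). It is a summary of what control_msg_list.sh did in this run, not an independent recipe:

# TCP transport with a 768-byte in-capsule data size and a single control message buffer
rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
# subsystem, malloc namespace and TCP listener (control_msg_list.sh lines @20-@23 in the trace)
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# three 1-second, queue-depth-1, 4 KiB randread perf clients on lcores 1-3 (masks 0x2/0x4/0x8),
# launched in the background and then waited on, as in the log above
spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
wait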
00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.290 ************************************ 00:19:12.290 START TEST nvmf_wait_for_buf 00:19:12.290 ************************************ 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:12.290 * Looking for test storage... 00:19:12.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.290 --rc genhtml_branch_coverage=1 00:19:12.290 --rc genhtml_function_coverage=1 00:19:12.290 --rc genhtml_legend=1 00:19:12.290 --rc geninfo_all_blocks=1 00:19:12.290 --rc geninfo_unexecuted_blocks=1 00:19:12.290 00:19:12.290 ' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.290 --rc genhtml_branch_coverage=1 00:19:12.290 --rc genhtml_function_coverage=1 00:19:12.290 --rc genhtml_legend=1 00:19:12.290 --rc geninfo_all_blocks=1 00:19:12.290 --rc geninfo_unexecuted_blocks=1 00:19:12.290 00:19:12.290 ' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.290 --rc genhtml_branch_coverage=1 00:19:12.290 --rc genhtml_function_coverage=1 00:19:12.290 --rc genhtml_legend=1 00:19:12.290 --rc geninfo_all_blocks=1 00:19:12.290 --rc geninfo_unexecuted_blocks=1 00:19:12.290 00:19:12.290 ' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.290 --rc genhtml_branch_coverage=1 00:19:12.290 --rc genhtml_function_coverage=1 00:19:12.290 --rc genhtml_legend=1 00:19:12.290 --rc geninfo_all_blocks=1 00:19:12.290 --rc geninfo_unexecuted_blocks=1 00:19:12.290 00:19:12.290 ' 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.290 11:37:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.290 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:12.291 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.191 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.191 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:14.191 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:14.191 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.192 
11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:14.192 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:14.192 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:14.192 Found net devices under 0000:09:00.0: cvl_0_0 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:14.192 Found net devices under 0000:09:00.1: cvl_0_1 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.192 11:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.192 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:14.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:19:14.193 00:19:14.193 --- 10.0.0.2 ping statistics --- 00:19:14.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.193 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:19:14.193 00:19:14.193 --- 10.0.0.1 ping statistics --- 00:19:14.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.193 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2960845 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2960845 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2960845 ']' 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.193 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.451 [2024-11-15 11:37:54.647923] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:19:14.451 [2024-11-15 11:37:54.647993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.451 [2024-11-15 11:37:54.718490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.451 [2024-11-15 11:37:54.775070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.451 [2024-11-15 11:37:54.775116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.451 [2024-11-15 11:37:54.775139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.451 [2024-11-15 11:37:54.775149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.451 [2024-11-15 11:37:54.775158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.451 [2024-11-15 11:37:54.775753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.451 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.451 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:14.451 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:14.451 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.451 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 Malloc0 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 [2024-11-15 11:37:55.010778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.709 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:14.710 [2024-11-15 11:37:55.034958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.710 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.710 11:37:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:14.710 [2024-11-15 11:37:55.116439] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:16.610 Initializing NVMe Controllers 00:19:16.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:16.610 Initialization complete. Launching workers. 00:19:16.610 ======================================================== 00:19:16.610 Latency(us) 00:19:16.610 Device Information : IOPS MiB/s Average min max 00:19:16.610 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 39.00 4.88 104559.99 31955.33 191500.83 00:19:16.610 ======================================================== 00:19:16.610 Total : 39.00 4.88 104559.99 31955.33 191500.83 00:19:16.610 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=598 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 598 -eq 0 ]] 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.610 rmmod nvme_tcp 00:19:16.610 rmmod nvme_fabrics 00:19:16.610 rmmod nvme_keyring 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2960845 ']' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2960845 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2960845 ']' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2960845 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960845 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960845' 00:19:16.610 killing process with pid 2960845 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2960845 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2960845 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.610 11:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.142 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:19.142 00:19:19.142 real 0m6.792s 00:19:19.142 user 0m3.201s 00:19:19.142 sys 0m2.041s 00:19:19.142 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.142 11:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:19.142 ************************************ 00:19:19.142 END TEST nvmf_wait_for_buf 00:19:19.142 ************************************ 00:19:19.142 11:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:19.142 11:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:19.142 11:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:19.142 11:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:19.142 11:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.142 11:37:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:21.046 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:21.046 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:21.046 Found net devices under 0000:09:00.0: cvl_0_0 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:21.046 Found net devices under 0000:09:00.1: cvl_0_1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.046 ************************************ 00:19:21.046 START TEST nvmf_perf_adq 00:19:21.046 ************************************ 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:21.046 * Looking for test storage... 00:19:21.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.046 11:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:21.046 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.047 --rc genhtml_branch_coverage=1 00:19:21.047 --rc genhtml_function_coverage=1 00:19:21.047 --rc genhtml_legend=1 00:19:21.047 --rc geninfo_all_blocks=1 00:19:21.047 --rc geninfo_unexecuted_blocks=1 00:19:21.047 00:19:21.047 ' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.047 --rc genhtml_branch_coverage=1 00:19:21.047 --rc genhtml_function_coverage=1 00:19:21.047 --rc genhtml_legend=1 00:19:21.047 --rc geninfo_all_blocks=1 00:19:21.047 --rc geninfo_unexecuted_blocks=1 00:19:21.047 00:19:21.047 ' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.047 --rc genhtml_branch_coverage=1 00:19:21.047 --rc genhtml_function_coverage=1 00:19:21.047 --rc genhtml_legend=1 00:19:21.047 --rc geninfo_all_blocks=1 00:19:21.047 --rc geninfo_unexecuted_blocks=1 00:19:21.047 00:19:21.047 ' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.047 --rc genhtml_branch_coverage=1 00:19:21.047 --rc genhtml_function_coverage=1 00:19:21.047 --rc genhtml_legend=1 00:19:21.047 --rc geninfo_all_blocks=1 00:19:21.047 --rc geninfo_unexecuted_blocks=1 00:19:21.047 00:19:21.047 ' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
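The version gate traced just above (the lt / cmp_versions / decimal helpers called with "lt 1.15 2") boils down to a field-by-field numeric comparison of two dotted version strings, here deciding whether the installed lcov is old enough to need the extra coverage flags. A minimal standalone sketch of that comparison, assuming purely numeric components and not claiming to be the exact scripts/common.sh helper, looks like this:

    # Sketch of the dotted-version comparison the trace walks through:
    # split both versions on '.'/'-', compare numerically field by field,
    # treat missing fields as 0. Returns success (0) when $1 < $2.
    version_lt() {
        local IFS='.-'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0    # first differing field decides
            (( a > b )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the 'lt 1.15 2' call traced above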
00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:21.047 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.047 11:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:23.579 11:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:23.579 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:23.579 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:23.579 Found net devices under 0000:09:00.0: cvl_0_0 00:19:23.579 11:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:23.579 Found net devices under 0000:09:00.1: cvl_0_1 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:23.579 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:23.839 11:38:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:26.368 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.647 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:31.648 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:31.648 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:31.648 Found net devices under 0000:09:00.0: cvl_0_0 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:31.648 Found net devices under 0000:09:00.1: cvl_0_1 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:31.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:19:31.648 00:19:31.648 --- 10.0.0.2 ping statistics --- 00:19:31.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.648 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:19:31.648 00:19:31.648 --- 10.0.0.1 ping statistics --- 00:19:31.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.648 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2965681 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:31.648 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2965681 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2965681 ']' 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 [2024-11-15 11:38:11.437883] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
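At this point nvmftestinit has split the two E810 ports between the root namespace (initiator side) and a dedicated namespace (target side) and started nvmf_tgt inside that namespace, held at --wait-for-rpc. Condensed from the commands traced above, with the rig-specific names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, the 10.0.0.0/24 addresses) kept as-is and the nvmf_tgt path shortened, the plumbing amounts to:

    # One port moves into the target namespace, its peer stays in the root
    # namespace as the initiator; each side gets one address on 10.0.0.0/24.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let the NVMe/TCP port through and sanity-check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Target application started inside the namespace, paused at --wait-for-rpc
    # until the socket/transport options that follow are applied.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &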
00:19:31.649 [2024-11-15 11:38:11.437969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.649 [2024-11-15 11:38:11.512473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.649 [2024-11-15 11:38:11.571204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.649 [2024-11-15 11:38:11.571258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.649 [2024-11-15 11:38:11.571281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.649 [2024-11-15 11:38:11.571295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.649 [2024-11-15 11:38:11.571314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.649 [2024-11-15 11:38:11.572993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.649 [2024-11-15 11:38:11.573033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.649 [2024-11-15 11:38:11.573125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.649 [2024-11-15 11:38:11.573128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 
11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 [2024-11-15 11:38:11.840321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 Malloc1 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.649 [2024-11-15 11:38:11.902582] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2965718 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:31.649 11:38:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:33.546 "tick_rate": 2700000000, 00:19:33.546 "poll_groups": [ 00:19:33.546 { 00:19:33.546 "name": "nvmf_tgt_poll_group_000", 00:19:33.546 "admin_qpairs": 1, 00:19:33.546 "io_qpairs": 1, 00:19:33.546 "current_admin_qpairs": 1, 00:19:33.546 "current_io_qpairs": 1, 00:19:33.546 "pending_bdev_io": 0, 00:19:33.546 "completed_nvme_io": 18735, 00:19:33.546 "transports": [ 00:19:33.546 { 00:19:33.546 "trtype": "TCP" 00:19:33.546 } 00:19:33.546 ] 00:19:33.546 }, 00:19:33.546 { 00:19:33.546 "name": "nvmf_tgt_poll_group_001", 00:19:33.546 "admin_qpairs": 0, 00:19:33.546 "io_qpairs": 1, 00:19:33.546 "current_admin_qpairs": 0, 00:19:33.546 "current_io_qpairs": 1, 00:19:33.546 "pending_bdev_io": 0, 00:19:33.546 "completed_nvme_io": 19913, 00:19:33.546 "transports": [ 00:19:33.546 { 00:19:33.546 "trtype": "TCP" 00:19:33.546 } 00:19:33.546 ] 00:19:33.546 }, 00:19:33.546 { 00:19:33.546 "name": "nvmf_tgt_poll_group_002", 00:19:33.546 "admin_qpairs": 0, 00:19:33.546 "io_qpairs": 1, 00:19:33.546 "current_admin_qpairs": 0, 00:19:33.546 "current_io_qpairs": 1, 00:19:33.546 "pending_bdev_io": 0, 00:19:33.546 "completed_nvme_io": 20149, 00:19:33.546 "transports": [ 00:19:33.546 { 00:19:33.546 "trtype": "TCP" 00:19:33.546 } 00:19:33.546 ] 00:19:33.546 }, 00:19:33.546 { 00:19:33.546 "name": "nvmf_tgt_poll_group_003", 00:19:33.546 "admin_qpairs": 0, 00:19:33.546 "io_qpairs": 1, 00:19:33.546 "current_admin_qpairs": 0, 00:19:33.546 "current_io_qpairs": 1, 00:19:33.546 "pending_bdev_io": 0, 00:19:33.546 "completed_nvme_io": 19278, 00:19:33.546 "transports": [ 00:19:33.546 { 00:19:33.546 "trtype": "TCP" 00:19:33.546 } 00:19:33.546 ] 00:19:33.546 } 00:19:33.546 ] 00:19:33.546 }' 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:33.546 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:33.830 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:33.830 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:33.830 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2965718 00:19:41.959 Initializing NVMe Controllers 00:19:41.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:41.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:41.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:41.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:41.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:19:41.959 Initialization complete. Launching workers. 00:19:41.959 ======================================================== 00:19:41.959 Latency(us) 00:19:41.959 Device Information : IOPS MiB/s Average min max 00:19:41.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10132.40 39.58 6316.83 2479.61 10729.07 00:19:41.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10480.70 40.94 6107.11 2499.47 10128.15 00:19:41.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10578.10 41.32 6049.63 2042.38 10605.44 00:19:41.960 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9901.10 38.68 6465.19 2558.54 10880.22 00:19:41.960 ======================================================== 00:19:41.960 Total : 41092.30 160.52 6230.30 2042.38 10880.22 00:19:41.960 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.960 rmmod nvme_tcp 00:19:41.960 rmmod nvme_fabrics 00:19:41.960 rmmod nvme_keyring 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2965681 ']' 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2965681 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2965681 ']' 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2965681 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2965681 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2965681' 00:19:41.960 killing process with pid 2965681 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2965681 00:19:41.960 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2965681 00:19:42.218 11:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.218 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.119 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.119 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:44.119 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:44.119 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:44.694 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:46.593 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:51.862 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:51.862 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:51.863 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:51.863 Found net devices under 0000:09:00.0: cvl_0_0 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:51.863 Found net devices under 0000:09:00.1: cvl_0_1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.863 11:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:51.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:19:51.863 00:19:51.863 --- 10.0.0.2 ping statistics --- 00:19:51.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.863 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:19:51.863 00:19:51.863 --- 10.0.0.1 ping statistics --- 00:19:51.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.863 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:51.863 net.core.busy_poll = 1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:51.863 net.core.busy_read = 1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:51.863 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2968334 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2968334 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2968334 ']' 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.121 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.121 [2024-11-15 11:38:32.350164] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:19:52.121 [2024-11-15 11:38:32.350236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.121 [2024-11-15 11:38:32.429469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.121 [2024-11-15 11:38:32.491448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:52.121 [2024-11-15 11:38:32.491494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.121 [2024-11-15 11:38:32.491519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.121 [2024-11-15 11:38:32.491530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.121 [2024-11-15 11:38:32.491541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.121 [2024-11-15 11:38:32.493031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.121 [2024-11-15 11:38:32.493084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.121 [2024-11-15 11:38:32.493159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.121 [2024-11-15 11:38:32.493162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.379 11:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.379 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.379 [2024-11-15 11:38:32.763758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.380 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.380 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:52.380 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.380 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 Malloc1 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.637 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:52.638 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.638 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:52.638 [2024-11-15 11:38:32.825861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.638 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.638 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2968483 00:19:52.638 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:52.638 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.535 11:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:54.535 "tick_rate": 2700000000, 00:19:54.535 "poll_groups": [ 00:19:54.535 { 00:19:54.535 "name": "nvmf_tgt_poll_group_000", 00:19:54.535 "admin_qpairs": 1, 00:19:54.535 "io_qpairs": 2, 00:19:54.535 "current_admin_qpairs": 1, 00:19:54.535 "current_io_qpairs": 2, 00:19:54.535 "pending_bdev_io": 0, 00:19:54.535 "completed_nvme_io": 27089, 00:19:54.535 "transports": [ 00:19:54.535 { 00:19:54.535 "trtype": "TCP" 00:19:54.535 } 00:19:54.535 ] 00:19:54.535 }, 00:19:54.535 { 00:19:54.535 "name": "nvmf_tgt_poll_group_001", 00:19:54.535 "admin_qpairs": 0, 00:19:54.535 "io_qpairs": 2, 00:19:54.535 "current_admin_qpairs": 0, 00:19:54.535 "current_io_qpairs": 2, 00:19:54.535 "pending_bdev_io": 0, 00:19:54.535 "completed_nvme_io": 25579, 00:19:54.535 "transports": [ 00:19:54.535 { 00:19:54.535 "trtype": "TCP" 00:19:54.535 } 00:19:54.535 ] 00:19:54.535 }, 00:19:54.535 { 00:19:54.535 "name": "nvmf_tgt_poll_group_002", 00:19:54.535 "admin_qpairs": 0, 00:19:54.535 "io_qpairs": 0, 00:19:54.535 "current_admin_qpairs": 0, 00:19:54.535 "current_io_qpairs": 0, 00:19:54.535 "pending_bdev_io": 0, 00:19:54.535 "completed_nvme_io": 0, 00:19:54.535 "transports": [ 00:19:54.535 { 00:19:54.535 "trtype": "TCP" 00:19:54.535 } 00:19:54.535 ] 00:19:54.535 }, 00:19:54.535 { 00:19:54.535 "name": "nvmf_tgt_poll_group_003", 00:19:54.535 "admin_qpairs": 0, 00:19:54.535 "io_qpairs": 0, 00:19:54.535 "current_admin_qpairs": 0, 00:19:54.535 "current_io_qpairs": 0, 00:19:54.535 "pending_bdev_io": 0, 00:19:54.535 "completed_nvme_io": 0, 00:19:54.535 "transports": [ 00:19:54.535 { 00:19:54.535 "trtype": "TCP" 00:19:54.535 } 00:19:54.535 ] 00:19:54.535 } 00:19:54.535 ] 00:19:54.535 }' 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:54.535 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2968483 00:20:02.640 Initializing NVMe Controllers 00:20:02.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:02.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:02.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:02.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:02.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:02.640 Initialization complete. Launching workers. 
00:20:02.640 ======================================================== 00:20:02.640 Latency(us) 00:20:02.640 Device Information : IOPS MiB/s Average min max 00:20:02.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6719.30 26.25 9528.09 1560.94 54925.76 00:20:02.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7090.40 27.70 9046.58 1354.56 53910.60 00:20:02.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6805.10 26.58 9407.14 1723.77 56918.30 00:20:02.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6577.90 25.69 9732.10 1012.05 54384.76 00:20:02.640 ======================================================== 00:20:02.640 Total : 27192.70 106.22 9421.62 1012.05 56918.30 00:20:02.640 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.640 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.640 rmmod nvme_tcp 00:20:02.640 rmmod nvme_fabrics 00:20:02.640 rmmod nvme_keyring 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2968334 ']' 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2968334 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2968334 ']' 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2968334 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.640 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2968334 00:20:02.898 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.898 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.898 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2968334' 00:20:02.898 killing process with pid 2968334 00:20:02.898 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2968334 00:20:02.898 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2968334 00:20:03.156 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:03.156 
11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:03.156 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:03.156 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:03.156 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:03.156 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:03.157 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:03.157 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.157 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:03.157 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.157 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.157 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.060 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:05.060 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:05.060 00:20:05.060 real 0m44.161s 00:20:05.060 user 2m41.243s 00:20:05.060 sys 0m8.999s 00:20:05.060 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.060 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.060 ************************************ 00:20:05.060 END TEST nvmf_perf_adq 00:20:05.060 ************************************ 00:20:05.060 11:38:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:05.060 11:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:05.061 11:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.061 11:38:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.061 ************************************ 00:20:05.061 START TEST nvmf_shutdown 00:20:05.061 ************************************ 00:20:05.061 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:05.320 * Looking for test storage... 
00:20:05.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:05.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.320 --rc genhtml_branch_coverage=1 00:20:05.320 --rc genhtml_function_coverage=1 00:20:05.320 --rc genhtml_legend=1 00:20:05.320 --rc geninfo_all_blocks=1 00:20:05.320 --rc geninfo_unexecuted_blocks=1 00:20:05.320 00:20:05.320 ' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:05.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.320 --rc genhtml_branch_coverage=1 00:20:05.320 --rc genhtml_function_coverage=1 00:20:05.320 --rc genhtml_legend=1 00:20:05.320 --rc geninfo_all_blocks=1 00:20:05.320 --rc geninfo_unexecuted_blocks=1 00:20:05.320 00:20:05.320 ' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:05.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.320 --rc genhtml_branch_coverage=1 00:20:05.320 --rc genhtml_function_coverage=1 00:20:05.320 --rc genhtml_legend=1 00:20:05.320 --rc geninfo_all_blocks=1 00:20:05.320 --rc geninfo_unexecuted_blocks=1 00:20:05.320 00:20:05.320 ' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:05.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.320 --rc genhtml_branch_coverage=1 00:20:05.320 --rc genhtml_function_coverage=1 00:20:05.320 --rc genhtml_legend=1 00:20:05.320 --rc geninfo_all_blocks=1 00:20:05.320 --rc geninfo_unexecuted_blocks=1 00:20:05.320 00:20:05.320 ' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.320 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:05.321 11:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:05.321 ************************************ 00:20:05.321 START TEST nvmf_shutdown_tc1 00:20:05.321 ************************************ 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.321 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:07.219 11:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:07.219 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:07.220 11:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:07.220 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:07.220 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:07.220 Found net devices under 0000:09:00.0: cvl_0_0 00:20:07.220 11:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:07.220 Found net devices under 0000:09:00.1: cvl_0_1 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.220 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:07.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:20:07.478 00:20:07.478 --- 10.0.0.2 ping statistics --- 00:20:07.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.478 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:20:07.478 00:20:07.478 --- 10.0.0.1 ping statistics --- 00:20:07.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.478 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.478 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2971646 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2971646 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2971646 ']' 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.479 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 [2024-11-15 11:38:47.851894] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:07.479 [2024-11-15 11:38:47.851971] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.737 [2024-11-15 11:38:47.928853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:07.737 [2024-11-15 11:38:47.990904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.737 [2024-11-15 11:38:47.990959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.737 [2024-11-15 11:38:47.990972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.737 [2024-11-15 11:38:47.990983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.737 [2024-11-15 11:38:47.990992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.737 [2024-11-15 11:38:47.992706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.737 [2024-11-15 11:38:47.992783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.737 [2024-11-15 11:38:47.992842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:07.737 [2024-11-15 11:38:47.992846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.737 [2024-11-15 11:38:48.141499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:07.737 11:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.737 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.995 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.995 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.995 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.995 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.995 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.995 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.996 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:07.996 Malloc1 
00:20:07.996 [2024-11-15 11:38:48.238639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.996 Malloc2 00:20:07.996 Malloc3 00:20:07.996 Malloc4 00:20:07.996 Malloc5 00:20:08.252 Malloc6 00:20:08.252 Malloc7 00:20:08.252 Malloc8 00:20:08.252 Malloc9 00:20:08.252 Malloc10 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2971825 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2971825 /var/tmp/bdevperf.sock 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2971825 ']' 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:08.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.510 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.510 { 00:20:08.510 "params": { 00:20:08.510 "name": "Nvme$subsystem", 00:20:08.510 "trtype": "$TEST_TRANSPORT", 00:20:08.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.510 "adrfam": "ipv4", 00:20:08.510 "trsvcid": "$NVMF_PORT", 00:20:08.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.510 "hdgst": ${hdgst:-false}, 00:20:08.510 "ddgst": ${ddgst:-false} 00:20:08.510 }, 00:20:08.510 "method": "bdev_nvme_attach_controller" 00:20:08.510 } 00:20:08.510 EOF 00:20:08.510 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 
"trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.511 { 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme$subsystem", 00:20:08.511 "trtype": "$TEST_TRANSPORT", 00:20:08.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "$NVMF_PORT", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.511 "hdgst": ${hdgst:-false}, 00:20:08.511 "ddgst": ${ddgst:-false} 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 } 00:20:08.511 EOF 00:20:08.511 )") 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:08.511 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme1", 00:20:08.511 "trtype": "tcp", 00:20:08.511 "traddr": "10.0.0.2", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "4420", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.511 "hdgst": false, 00:20:08.511 "ddgst": false 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 },{ 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme2", 00:20:08.511 "trtype": "tcp", 00:20:08.511 "traddr": "10.0.0.2", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "4420", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:08.511 "hdgst": false, 00:20:08.511 "ddgst": false 00:20:08.511 }, 00:20:08.511 "method": "bdev_nvme_attach_controller" 00:20:08.511 },{ 00:20:08.511 "params": { 00:20:08.511 "name": "Nvme3", 00:20:08.511 "trtype": "tcp", 00:20:08.511 "traddr": "10.0.0.2", 00:20:08.511 "adrfam": "ipv4", 00:20:08.511 "trsvcid": "4420", 00:20:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:08.511 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:08.511 "hdgst": false, 00:20:08.511 "ddgst": false 00:20:08.511 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme4", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme5", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme6", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme7", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme8", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme9", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 },{ 00:20:08.512 "params": { 00:20:08.512 "name": "Nvme10", 00:20:08.512 "trtype": "tcp", 00:20:08.512 "traddr": "10.0.0.2", 00:20:08.512 "adrfam": "ipv4", 00:20:08.512 "trsvcid": "4420", 00:20:08.512 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:08.512 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:08.512 "hdgst": false, 00:20:08.512 "ddgst": false 00:20:08.512 }, 00:20:08.512 "method": "bdev_nvme_attach_controller" 00:20:08.512 }' 00:20:08.512 [2024-11-15 11:38:48.763970] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:08.512 [2024-11-15 11:38:48.764062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:08.512 [2024-11-15 11:38:48.836573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.512 [2024-11-15 11:38:48.897059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2971825 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:10.408 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:11.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2971825 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2971646 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.780 { 00:20:11.780 "params": { 00:20:11.780 "name": "Nvme$subsystem", 00:20:11.780 "trtype": "$TEST_TRANSPORT", 00:20:11.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.780 "adrfam": "ipv4", 00:20:11.780 "trsvcid": "$NVMF_PORT", 00:20:11.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.780 "hdgst": ${hdgst:-false}, 00:20:11.780 "ddgst": ${ddgst:-false} 00:20:11.780 }, 00:20:11.780 "method": "bdev_nvme_attach_controller" 00:20:11.780 } 00:20:11.780 EOF 00:20:11.780 )") 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.780 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.780 { 00:20:11.780 "params": { 00:20:11.780 "name": "Nvme$subsystem", 00:20:11.780 "trtype": "$TEST_TRANSPORT", 00:20:11.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.780 "adrfam": "ipv4", 00:20:11.780 "trsvcid": "$NVMF_PORT", 00:20:11.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 
"trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 
"params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.781 { 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme$subsystem", 00:20:11.781 "trtype": "$TEST_TRANSPORT", 00:20:11.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "$NVMF_PORT", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.781 "hdgst": ${hdgst:-false}, 00:20:11.781 "ddgst": ${ddgst:-false} 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 } 00:20:11.781 EOF 00:20:11.781 )") 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:11.781 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme1", 00:20:11.781 "trtype": "tcp", 00:20:11.781 "traddr": "10.0.0.2", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "4420", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.781 "hdgst": false, 00:20:11.781 "ddgst": false 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 },{ 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme2", 00:20:11.781 "trtype": "tcp", 00:20:11.781 "traddr": "10.0.0.2", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "4420", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.781 "hdgst": false, 00:20:11.781 "ddgst": false 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 },{ 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme3", 00:20:11.781 "trtype": "tcp", 00:20:11.781 "traddr": "10.0.0.2", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "4420", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:11.781 "hdgst": false, 00:20:11.781 "ddgst": false 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 },{ 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme4", 00:20:11.781 "trtype": "tcp", 00:20:11.781 "traddr": "10.0.0.2", 00:20:11.781 "adrfam": "ipv4", 00:20:11.781 "trsvcid": "4420", 00:20:11.781 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:11.781 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:11.781 "hdgst": false, 00:20:11.781 "ddgst": false 00:20:11.781 }, 00:20:11.781 "method": "bdev_nvme_attach_controller" 00:20:11.781 },{ 00:20:11.781 "params": { 00:20:11.781 "name": "Nvme5", 00:20:11.781 "trtype": "tcp", 00:20:11.781 "traddr": "10.0.0.2", 00:20:11.782 "adrfam": "ipv4", 00:20:11.782 "trsvcid": "4420", 00:20:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:11.782 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:11.782 "hdgst": false, 00:20:11.782 "ddgst": false 00:20:11.782 }, 00:20:11.782 "method": "bdev_nvme_attach_controller" 00:20:11.782 },{ 00:20:11.782 "params": { 00:20:11.782 "name": "Nvme6", 00:20:11.782 "trtype": "tcp", 00:20:11.782 "traddr": "10.0.0.2", 00:20:11.782 "adrfam": "ipv4", 00:20:11.782 "trsvcid": "4420", 00:20:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:11.782 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:11.782 "hdgst": false, 00:20:11.782 "ddgst": false 00:20:11.782 }, 00:20:11.782 "method": "bdev_nvme_attach_controller" 00:20:11.782 },{ 00:20:11.782 "params": { 00:20:11.782 "name": "Nvme7", 00:20:11.782 "trtype": "tcp", 00:20:11.782 "traddr": "10.0.0.2", 00:20:11.782 "adrfam": "ipv4", 00:20:11.782 "trsvcid": "4420", 00:20:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:11.782 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:11.782 "hdgst": false, 00:20:11.782 "ddgst": false 00:20:11.782 }, 00:20:11.782 "method": "bdev_nvme_attach_controller" 00:20:11.782 },{ 00:20:11.782 "params": { 00:20:11.782 "name": "Nvme8", 00:20:11.782 "trtype": "tcp", 00:20:11.782 "traddr": "10.0.0.2", 00:20:11.782 "adrfam": "ipv4", 00:20:11.782 "trsvcid": "4420", 00:20:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:11.782 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:11.782 "hdgst": false, 00:20:11.782 "ddgst": false 00:20:11.782 }, 00:20:11.782 "method": "bdev_nvme_attach_controller" 00:20:11.782 },{ 00:20:11.782 "params": { 00:20:11.782 "name": "Nvme9", 00:20:11.782 "trtype": "tcp", 00:20:11.782 "traddr": "10.0.0.2", 00:20:11.782 "adrfam": "ipv4", 00:20:11.782 "trsvcid": "4420", 00:20:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:11.782 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:11.782 "hdgst": false, 00:20:11.782 "ddgst": false 00:20:11.782 }, 00:20:11.782 "method": "bdev_nvme_attach_controller" 00:20:11.782 },{ 00:20:11.782 "params": { 00:20:11.782 "name": "Nvme10", 00:20:11.782 "trtype": "tcp", 00:20:11.782 "traddr": "10.0.0.2", 00:20:11.782 "adrfam": "ipv4", 00:20:11.782 "trsvcid": "4420", 00:20:11.782 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:11.782 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:11.782 "hdgst": false, 00:20:11.782 "ddgst": false 00:20:11.782 }, 00:20:11.782 "method": "bdev_nvme_attach_controller" 00:20:11.782 }' 00:20:11.782 [2024-11-15 11:38:51.840134] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:11.782 [2024-11-15 11:38:51.840229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972244 ] 00:20:11.782 [2024-11-15 11:38:51.914621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.782 [2024-11-15 11:38:51.977641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.152 Running I/O for 1 seconds... 00:20:14.523 1805.00 IOPS, 112.81 MiB/s 00:20:14.523 Latency(us) 00:20:14.523 [2024-11-15T10:38:54.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.523 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme1n1 : 1.14 228.65 14.29 0.00 0.00 271154.99 21845.33 257872.02 00:20:14.523 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme2n1 : 1.15 222.09 13.88 0.00 0.00 280775.30 17961.72 262532.36 00:20:14.523 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme3n1 : 1.10 235.35 14.71 0.00 0.00 257981.44 8932.31 265639.25 00:20:14.523 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme4n1 : 1.11 234.74 14.67 0.00 0.00 254967.15 9709.04 225249.66 00:20:14.523 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme5n1 : 1.17 218.05 13.63 0.00 0.00 272432.73 21554.06 284280.60 00:20:14.523 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme6n1 : 1.16 220.26 13.77 0.00 0.00 264909.37 21262.79 260978.92 00:20:14.523 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme7n1 : 1.11 229.73 14.36 0.00 0.00 248554.76 19029.71 242337.56 00:20:14.523 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification 
LBA range: start 0x0 length 0x400 00:20:14.523 Nvme8n1 : 1.16 220.09 13.76 0.00 0.00 255713.09 17087.91 259425.47 00:20:14.523 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme9n1 : 1.18 271.36 16.96 0.00 0.00 204710.49 17185.00 256318.58 00:20:14.523 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:14.523 Verification LBA range: start 0x0 length 0x400 00:20:14.523 Nvme10n1 : 1.17 218.73 13.67 0.00 0.00 249236.86 20000.62 284280.60 00:20:14.523 [2024-11-15T10:38:54.950Z] =================================================================================================================== 00:20:14.523 [2024-11-15T10:38:54.950Z] Total : 2299.05 143.69 0.00 0.00 254827.72 8932.31 284280.60 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.780 11:38:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.780 rmmod nvme_tcp 00:20:14.780 rmmod nvme_fabrics 00:20:14.780 rmmod nvme_keyring 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2971646 ']' 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2971646 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2971646 ']' 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2971646 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2971646 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2971646' 00:20:14.780 killing process with pid 2971646 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2971646 00:20:14.780 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2971646 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.399 11:38:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:17.305 00:20:17.305 real 0m12.023s 00:20:17.305 user 0m35.307s 00:20:17.305 sys 0m3.282s 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:17.305 ************************************ 00:20:17.305 END TEST nvmf_shutdown_tc1 00:20:17.305 ************************************ 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:17.305 ************************************ 00:20:17.305 START TEST nvmf_shutdown_tc2 00:20:17.305 ************************************ 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:17.305 11:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:17.305 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:17.305 11:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:17.305 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.305 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:17.306 Found net devices under 0000:09:00.0: cvl_0_0 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:17.306 Found net devices under 0000:09:00.1: cvl_0_1 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:17.306 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:17.564 11:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:17.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:20:17.564 00:20:17.564 --- 10.0.0.2 ping statistics --- 00:20:17.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.564 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:20:17.564 00:20:17.564 --- 10.0.0.1 ping statistics --- 00:20:17.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.564 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2973018 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2973018 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2973018 ']' 00:20:17.564 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.565 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.565 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
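For reference, the target bring-up traced above (namespace creation, addressing, the firewall rule, the nvmf_tgt launch, and the transport creation that follows just below) collapses into a short shell sketch. Every command is lifted from this trace; the relative paths (./build/bin/nvmf_tgt, ./scripts/rpc.py) and the interface names cvl_0_0/cvl_0_1 are specific to this run and should be treated as assumptions elsewhere, and the harness's own waitforlisten polling is approximated here by framework_wait_init.
# move the target-side port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# accept NVMe/TCP connections arriving on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# start the target inside the namespace with the same core mask used above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0x1E &
# once the RPC socket is up, wait for framework init, then create the TCP transport
./scripts/rpc.py framework_wait_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192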
00:20:17.565 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.565 11:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.565 [2024-11-15 11:38:57.937160] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:17.565 [2024-11-15 11:38:57.937227] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.823 [2024-11-15 11:38:58.007952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:17.823 [2024-11-15 11:38:58.062730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.823 [2024-11-15 11:38:58.062781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.823 [2024-11-15 11:38:58.062804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.823 [2024-11-15 11:38:58.062822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.823 [2024-11-15 11:38:58.062832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.823 [2024-11-15 11:38:58.064283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.823 [2024-11-15 11:38:58.064344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.823 [2024-11-15 11:38:58.064408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:17.823 [2024-11-15 11:38:58.064411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.823 [2024-11-15 11:38:58.216401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:17.823 11:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:17.823 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.081 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.081 Malloc1 
00:20:18.081 [2024-11-15 11:38:58.319693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.081 Malloc2 00:20:18.081 Malloc3 00:20:18.081 Malloc4 00:20:18.081 Malloc5 00:20:18.339 Malloc6 00:20:18.339 Malloc7 00:20:18.339 Malloc8 00:20:18.339 Malloc9 00:20:18.339 Malloc10 00:20:18.597 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.597 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:18.597 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.597 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.597 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2973197 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2973197 /var/tmp/bdevperf.sock 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2973197 ']' 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
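The xtrace that follows shows gen_nvmf_target_json assembling the bdevperf --json configuration one heredoc fragment per subsystem and handing it to bdevperf over /dev/fd/63. Collapsed into a standalone form, and showing only the first of the ten controllers, the run amounts to roughly the sketch below; the ./build/examples/bdevperf path reflects this workspace, and the outer "subsystems"/"bdev" wrapper is the usual SPDK --json layout rather than something printed verbatim in this excerpt.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)
The -r flag gives bdevperf its own RPC socket, which is what the waitforio loop further below polls with bdev_get_iostat -b Nvme1n1 to decide when enough read I/O has completed.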
00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 
"trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.598 "name": "Nvme$subsystem", 00:20:18.598 "trtype": "$TEST_TRANSPORT", 00:20:18.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.598 "adrfam": "ipv4", 00:20:18.598 "trsvcid": "$NVMF_PORT", 00:20:18.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.598 "hdgst": ${hdgst:-false}, 00:20:18.598 "ddgst": ${ddgst:-false} 00:20:18.598 }, 00:20:18.598 "method": "bdev_nvme_attach_controller" 00:20:18.598 } 00:20:18.598 EOF 00:20:18.598 )") 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.598 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.598 { 00:20:18.598 "params": { 00:20:18.599 "name": "Nvme$subsystem", 00:20:18.599 "trtype": "$TEST_TRANSPORT", 00:20:18.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "$NVMF_PORT", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.599 "hdgst": ${hdgst:-false}, 00:20:18.599 "ddgst": ${ddgst:-false} 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 } 00:20:18.599 EOF 00:20:18.599 )") 00:20:18.599 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:18.599 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:20:18.599 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:18.599 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme1", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme2", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme3", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme4", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme5", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme6", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme7", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme8", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme9", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 },{ 00:20:18.599 "params": { 00:20:18.599 "name": "Nvme10", 00:20:18.599 "trtype": "tcp", 00:20:18.599 "traddr": "10.0.0.2", 00:20:18.599 "adrfam": "ipv4", 00:20:18.599 "trsvcid": "4420", 00:20:18.599 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:18.599 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:18.599 "hdgst": false, 00:20:18.599 "ddgst": false 00:20:18.599 }, 00:20:18.599 "method": "bdev_nvme_attach_controller" 00:20:18.599 }' 00:20:18.599 [2024-11-15 11:38:58.841136] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:18.599 [2024-11-15 11:38:58.841225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2973197 ] 00:20:18.599 [2024-11-15 11:38:58.913580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.599 [2024-11-15 11:38:58.973799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.497 Running I/O for 10 seconds... 00:20:20.497 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.497 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:20.497 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:20.497 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:20.498 11:39:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:20.756 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:20.756 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:20.756 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:20.756 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:20.756 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.756 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:21.014 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.014 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:21.014 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:21.014 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:20:21.272 11:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2973197 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2973197 ']' 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2973197 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2973197 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2973197' 00:20:21.272 killing process with pid 2973197 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2973197 00:20:21.272 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2973197 00:20:21.272 Received shutdown signal, test time was about 0.960504 seconds 00:20:21.272 00:20:21.272 Latency(us) 00:20:21.272 [2024-11-15T10:39:01.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.272 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.272 Verification LBA range: start 0x0 length 0x400 00:20:21.272 Nvme1n1 : 0.96 266.76 16.67 0.00 0.00 236530.92 19320.98 256318.58 00:20:21.272 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.272 Verification LBA range: start 0x0 length 0x400 00:20:21.272 Nvme2n1 : 0.96 267.77 16.74 0.00 0.00 230483.82 18738.44 246997.90 00:20:21.272 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.272 Verification LBA range: start 0x0 length 0x400 00:20:21.272 Nvme3n1 : 0.95 274.46 17.15 0.00 0.00 220116.98 6990.51 239230.67 00:20:21.272 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme4n1 : 0.95 272.71 17.04 0.00 0.00 216943.20 3932.16 260978.92 00:20:21.273 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme5n1 : 0.93 206.83 12.93 0.00 0.00 280862.97 19709.35 259425.47 00:20:21.273 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme6n1 : 0.91 209.87 13.12 0.00 0.00 269786.83 34758.35 234570.33 00:20:21.273 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme7n1 : 0.92 208.94 13.06 0.00 0.00 265505.00 21651.15 253211.69 00:20:21.273 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme8n1 : 0.93 205.79 12.86 0.00 0.00 264218.55 16505.36 259425.47 00:20:21.273 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme9n1 : 0.95 203.04 12.69 0.00 0.00 262715.16 23398.78 282727.16 00:20:21.273 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:21.273 Verification LBA range: start 0x0 length 0x400 00:20:21.273 Nvme10n1 : 0.94 204.13 12.76 0.00 0.00 255109.25 22427.88 267192.70 00:20:21.273 [2024-11-15T10:39:01.700Z] =================================================================================================================== 00:20:21.273 [2024-11-15T10:39:01.700Z] Total : 2320.30 145.02 0.00 0.00 247261.10 3932.16 282727.16 00:20:21.531 11:39:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2973018 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.463 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.463 rmmod nvme_tcp 00:20:22.463 rmmod nvme_fabrics 00:20:22.720 rmmod nvme_keyring 00:20:22.720 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.720 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:22.720 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:22.720 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2973018 ']' 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@518 -- # killprocess 2973018 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2973018 ']' 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2973018 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2973018 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2973018' 00:20:22.721 killing process with pid 2973018 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2973018 00:20:22.721 11:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2973018 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.287 11:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:25.191 00:20:25.191 real 0m7.797s 00:20:25.191 user 0m24.085s 00:20:25.191 sys 0m1.476s 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- 
# set +x 00:20:25.191 ************************************ 00:20:25.191 END TEST nvmf_shutdown_tc2 00:20:25.191 ************************************ 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:25.191 ************************************ 00:20:25.191 START TEST nvmf_shutdown_tc3 00:20:25.191 ************************************ 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.191 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.192 11:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.192 11:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:25.192 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:25.192 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:09:00.0: cvl_0_0' 00:20:25.192 Found net devices under 0000:09:00.0: cvl_0_0 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:25.192 Found net devices under 0000:09:00.1: cvl_0_1 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.192 11:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.192 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:20:25.452 00:20:25.452 --- 10.0.0.2 ping statistics --- 00:20:25.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.452 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:25.452 00:20:25.452 --- 10.0.0.1 ping statistics --- 00:20:25.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.452 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2974225 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2974225 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2974225 ']' 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
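The trace above is the namespace plumbing that nvmf_tcp_init performs before the tc3 target is started. Condensed into plain commands (interface names and addresses exactly as they appear in this run), it amounts to the following sketch:

# Sketch of the target/initiator split seen in the trace above: cvl_0_0 (target
# side) is moved into its own namespace and gets 10.0.0.2, while cvl_0_1
# (initiator side) stays in the root namespace with 10.0.0.1.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# let the NVMe/TCP listener port through the host firewall (as in the ipts call)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, matching the two pings in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The nvmf_tgt app is then launched inside cvl_0_0_ns_spdk (hence the repeated "ip netns exec" prefixes in the nvmfappstart entry that follows), so the target listens on 10.0.0.2 while the initiator side stays in the root namespace on 10.0.0.1.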
00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.452 11:39:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.452 [2024-11-15 11:39:05.794998] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:25.452 [2024-11-15 11:39:05.795075] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.452 [2024-11-15 11:39:05.866198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.711 [2024-11-15 11:39:05.925224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.711 [2024-11-15 11:39:05.925276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.711 [2024-11-15 11:39:05.925298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.711 [2024-11-15 11:39:05.925332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.711 [2024-11-15 11:39:05.925360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.711 [2024-11-15 11:39:05.930324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.711 [2024-11-15 11:39:05.930395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.711 [2024-11-15 11:39:05.930459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:25.711 [2024-11-15 11:39:05.930463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.711 [2024-11-15 11:39:06.070006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:25.711 11:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.711 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:25.711 Malloc1 
00:20:25.969 [2024-11-15 11:39:06.154516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.969 Malloc2 00:20:25.969 Malloc3 00:20:25.969 Malloc4 00:20:25.969 Malloc5 00:20:25.969 Malloc6 00:20:26.227 Malloc7 00:20:26.227 Malloc8 00:20:26.227 Malloc9 00:20:26.227 Malloc10 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2974402 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2974402 /var/tmp/bdevperf.sock 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2974402 ']' 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.227 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
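Only the per-subsystem loop and its output (Malloc1 through Malloc10, plus the TCP listener notice on 10.0.0.2 port 4420) are visible in the trace; the heredoc body that shutdown.sh cats into rpcs.txt is not shown. A hypothetical block using standard SPDK rpc.py methods that would produce the same objects might look like the sketch below; the bdev size, block size and serial numbers are placeholders, not values taken from this run.

# Hypothetical per-subsystem RPC block (the real heredoc body is not in the trace).
for i in {1..10}; do
	cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done >> rpcs.txt
# rpcs.txt is presumably replayed against the running nvmf_tgt via scripts/rpc.py,
# which is why the Malloc bdevs and the listener appear before bdevperf is launched.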
00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.228 { 00:20:26.228 "params": { 00:20:26.228 "name": "Nvme$subsystem", 00:20:26.228 "trtype": "$TEST_TRANSPORT", 00:20:26.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.228 "adrfam": "ipv4", 00:20:26.228 "trsvcid": "$NVMF_PORT", 00:20:26.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.228 "hdgst": ${hdgst:-false}, 00:20:26.228 "ddgst": ${ddgst:-false} 00:20:26.228 }, 00:20:26.228 "method": "bdev_nvme_attach_controller" 00:20:26.228 } 00:20:26.228 EOF 00:20:26.228 )") 00:20:26.228 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.487 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.487 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.487 { 00:20:26.487 "params": { 00:20:26.487 "name": "Nvme$subsystem", 00:20:26.487 "trtype": "$TEST_TRANSPORT", 00:20:26.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.487 "adrfam": "ipv4", 00:20:26.487 "trsvcid": "$NVMF_PORT", 00:20:26.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.487 "hdgst": ${hdgst:-false}, 00:20:26.487 "ddgst": ${ddgst:-false} 00:20:26.487 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 } 00:20:26.488 EOF 00:20:26.488 )") 00:20:26.488 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:26.488 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
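The config fragments built above are merged with jq and joined into the single JSON document printed next, which bdevperf reads via --json /dev/fd/63. Once bdevperf is up, the test waits for I/O to actually flow before shutting the target down; the polling helper can be reconstructed from the shutdown.sh@51-70 entries traced a few entries further down, where read_io_count climbs from 3 to 67 to 136 before the loop breaks. A reconstruction under those assumptions (the real helper in target/shutdown.sh may differ in detail):

# waitforio as reconstructed from the trace: poll bdevperf over its RPC socket
# until the first bdev has completed at least 100 reads, giving up after ten
# polls of 0.25 s each. rpc_cmd is the wrapper used throughout this log.
waitforio() {
	local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
			| jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}
# e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1, as in the tc3 trace below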
00:20:26.488 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:26.488 11:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme1", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme2", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme3", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme4", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme5", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme6", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme7", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme8", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme9", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 },{ 00:20:26.488 "params": { 00:20:26.488 "name": "Nvme10", 00:20:26.488 "trtype": "tcp", 00:20:26.488 "traddr": "10.0.0.2", 00:20:26.488 "adrfam": "ipv4", 00:20:26.488 "trsvcid": "4420", 00:20:26.488 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:26.488 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:26.488 "hdgst": false, 00:20:26.488 "ddgst": false 00:20:26.488 }, 00:20:26.488 "method": "bdev_nvme_attach_controller" 00:20:26.488 }' 00:20:26.488 [2024-11-15 11:39:06.671088] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:26.488 [2024-11-15 11:39:06.671180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974402 ] 00:20:26.488 [2024-11-15 11:39:06.742900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.488 [2024-11-15 11:39:06.803454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.386 Running I/O for 10 seconds... 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:28.386 11:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:28.386 11:39:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.644 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.903 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:28.903 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:28.903 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:28.903 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:28.903 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:29.182 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:29.182 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:29.182 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.182 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:29.182 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=136 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2974225 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2974225 ']' 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2974225 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2974225 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2974225' 00:20:29.183 killing process with pid 2974225 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2974225 00:20:29.183 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2974225 00:20:29.183 [2024-11-15 11:39:09.401724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with 
the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.401989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.402001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb61b0 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403750] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.183 [2024-11-15 11:39:09.403959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.403971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.403983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.403995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.404011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 
00:20:29.184 [2024-11-15 11:39:09.404023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.404035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.404047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.404060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.404071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.404083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ed90 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is 
same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.405995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406242] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.406277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6680 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.407496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.184 [2024-11-15 11:39:09.407538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.184 [2024-11-15 11:39:09.407557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.184 [2024-11-15 11:39:09.407572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.184 [2024-11-15 11:39:09.407599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.184 [2024-11-15 11:39:09.407613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.184 [2024-11-15 11:39:09.407628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.184 [2024-11-15 11:39:09.407654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.184 [2024-11-15 11:39:09.407667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2858060 is same with the state(6) to be set 00:20:29.184 [2024-11-15 11:39:09.407760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.184 [2024-11-15 11:39:09.407783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.407804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.407818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.407832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.407845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.407859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.407872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:29.185 [2024-11-15 11:39:09.407885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c6f0 is same with the state(6) to be set 00:20:29.185 [2024-11-15 11:39:09.407930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.407950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.407966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.407980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.407995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.408008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.185 [2024-11-15 11:39:09.408035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2403220 is same with the state(6) to be set 00:20:29.185 [2024-11-15 11:39:09.408453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.408973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.408987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.185 [2024-11-15 11:39:09.409451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.185 [2024-11-15 11:39:09.409452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.185 [2024-11-15 11:39:09.409468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.409498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:1he state(6) to be set 00:20:29.186 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.409516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:29.186 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same 
with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.409602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1he state(6) to be set 00:20:29.186 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.409668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:1he state(6) to be set 00:20:29.186 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:1[2024-11-15 11:39:09.409749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.409762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:29.186 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:1[2024-11-15 11:39:09.409812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 11:39:09.409826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same 
with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128[2024-11-15 11:39:09.409877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 11:39:09.409890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.409942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.409954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128[2024-11-15 11:39:09.409967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 11:39:09.409981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.409998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.410008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.410019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
c[2024-11-15 11:39:09.410020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 he state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.410034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.410037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.410046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.410051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:29.186 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.186 [2024-11-15 11:39:09.410065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.410068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.186 [2024-11-15 11:39:09.410077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.186 [2024-11-15 11:39:09.410082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.410126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128he state(6) to be set 00:20:29.187 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with 
the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:12[2024-11-15 11:39:09.410187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 he state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-15 11:39:09.410200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 he state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.410217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6b50 is same with t[2024-11-15 11:39:09.410230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:20:29.187 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.187 [2024-11-15 11:39:09.410558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.187 [2024-11-15 11:39:09.410631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:29.187 [2024-11-15 11:39:09.411573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 
[2024-11-15 11:39:09.411736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.187 [2024-11-15 11:39:09.411907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.411997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the 
state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.412405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7040 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413582] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 [2024-11-15 11:39:09.413857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 
00:20:29.188 
[2024-11-15 11:39:09.413870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.188 
[2024-11-15 11:39:09.413929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.188 
[2024-11-15 11:39:09.413942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.188 
[2024-11-15 11:39:09.413969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.188 
[2024-11-15 11:39:09.413982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.413996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7510 is same with the state(6) to be set 00:20:29.189 
[2024-11-15 11:39:09.414185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 
[2024-11-15 11:39:09.414218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 
[2024-11-15 11:39:09.414235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 
11:39:09.414549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414856] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.414975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.414989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.189 [2024-11-15 11:39:09.415011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.189 [2024-11-15 11:39:09.415026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 [2024-11-15 11:39:09.415042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 [2024-11-15 11:39:09.415056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 [2024-11-15 11:39:09.415071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 [2024-11-15 11:39:09.415085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 [2024-11-15 11:39:09.415103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 [2024-11-15 11:39:09.415118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 [2024-11-15 11:39:09.415134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 [2024-11-15 11:39:09.415149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 [2024-11-15 11:39:09.415165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.190 
[2024-11-15 11:39:09.415772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.190 
[2024-11-15 11:39:09.415785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.190 
[2024-11-15 11:39:09.415792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.191 
[2024-11-15 11:39:09.415823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.191 
[2024-11-15 11:39:09.415849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.191 
[2024-11-15 11:39:09.415888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.191 
[2024-11-15 11:39:09.415913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.191 
[2024-11-15 11:39:09.415953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.191 
[2024-11-15 11:39:09.415979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.415986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.191 
[2024-11-15 11:39:09.415992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.416004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.416016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.416028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 
[2024-11-15 11:39:09.416039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.416138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb79e0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:29.191 [2024-11-15 11:39:09.417262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c6f0 (9): Bad file descriptor 00:20:29.191 [2024-11-15 11:39:09.417728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 
00:20:29.191 [2024-11-15 11:39:09.417907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.417994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.191 [2024-11-15 11:39:09.418155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is 
same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.418643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7eb0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.419680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:29.192 [2024-11-15 11:39:09.419744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a0b0 (9): Bad file descriptor 00:20:29.192 [2024-11-15 11:39:09.419802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2858060 (9): Bad file descriptor 00:20:29.192 [2024-11-15 11:39:09.419871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.419894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.419910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.419924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.419938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.419951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.419965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.419979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.419991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2837f70 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2403220 (9): Bad file descriptor 00:20:29.192 [2024-11-15 11:39:09.420071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28371f0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2830090 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.192 [2024-11-15 11:39:09.420533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.192 [2024-11-15 11:39:09.420546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2374110 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.192 [2024-11-15 11:39:09.420899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is 
same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.420987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 
[2024-11-15 11:39:09.421411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb472b0 is same with the state(6) to be set 00:20:29.193 [2024-11-15 11:39:09.421607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.193 [2024-11-15 11:39:09.421749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.193 [2024-11-15 11:39:09.421763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.421983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.421998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 
[2024-11-15 11:39:09.422798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.422978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.422992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.194 [2024-11-15 11:39:09.423006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.194 [2024-11-15 11:39:09.423020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 
11:39:09.423111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.423255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.423480] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:29.195 [2024-11-15 11:39:09.423634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.195 [2024-11-15 11:39:09.423663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240c6f0 with addr=10.0.0.2, port=4420 00:20:29.195 [2024-11-15 11:39:09.423680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c6f0 is same with the state(6) to be set 00:20:29.195 [2024-11-15 11:39:09.423782] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:29.195 [2024-11-15 11:39:09.425524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:29.195 [2024-11-15 11:39:09.425585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830e90 (9): Bad file descriptor 00:20:29.195 [2024-11-15 11:39:09.425699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.195 [2024-11-15 11:39:09.425726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240a0b0 with addr=10.0.0.2, port=4420 00:20:29.195 [2024-11-15 11:39:09.425742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a0b0 is same with the state(6) to be set 00:20:29.195 [2024-11-15 11:39:09.425761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c6f0 (9): Bad file descriptor 00:20:29.195 [2024-11-15 11:39:09.425865] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:29.195 [2024-11-15 11:39:09.426119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a0b0 (9): Bad file descriptor 00:20:29.195 [2024-11-15 11:39:09.426146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:29.195 [2024-11-15 11:39:09.426160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:29.195 [2024-11-15 11:39:09.426175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:29.195 [2024-11-15 11:39:09.426191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:29.195 [2024-11-15 11:39:09.426272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.426983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.426998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.427014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.427028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.427044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.427058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.427074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.195 [2024-11-15 11:39:09.427088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.195 [2024-11-15 11:39:09.427104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.427982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.427997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.428028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.428058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.428093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.428123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.428153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.196 [2024-11-15 11:39:09.428183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.196 [2024-11-15 11:39:09.428199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.428213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.428229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.428242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.428258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.428272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.428308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.428326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.428342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.428357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.428378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x280da80 is same with the state(6) to be set 00:20:29.197 [2024-11-15 11:39:09.428510] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:29.197 [2024-11-15 11:39:09.429040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.197 [2024-11-15 11:39:09.429069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2830e90 with addr=10.0.0.2, port=4420 00:20:29.197 [2024-11-15 11:39:09.429086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2830e90 is same with the state(6) to be set 00:20:29.197 [2024-11-15 11:39:09.429102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:29.197 [2024-11-15 11:39:09.429116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:29.197 [2024-11-15 11:39:09.429130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:20:29.197 [2024-11-15 11:39:09.429149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:29.197 [2024-11-15 11:39:09.430411] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:29.197 [2024-11-15 11:39:09.430505] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:29.197 [2024-11-15 11:39:09.430542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:29.197 [2024-11-15 11:39:09.430573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830090 (9): Bad file descriptor 00:20:29.197 [2024-11-15 11:39:09.430607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830e90 (9): Bad file descriptor 00:20:29.197 [2024-11-15 11:39:09.430652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.197 [2024-11-15 11:39:09.430673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.430689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.197 [2024-11-15 11:39:09.430703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.430717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.197 [2024-11-15 11:39:09.430731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.430744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.197 [2024-11-15 11:39:09.430758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.430770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28311c0 is same with the state(6) to be set 00:20:29.197 [2024-11-15 11:39:09.430812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2837f70 (9): Bad file descriptor 00:20:29.197 [2024-11-15 11:39:09.430851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28371f0 (9): Bad file descriptor 00:20:29.197 [2024-11-15 11:39:09.430889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2374110 (9): Bad file descriptor 00:20:29.197 [2024-11-15 11:39:09.431055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:29.197 [2024-11-15 11:39:09.431077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:29.197 [2024-11-15 11:39:09.431092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:29.197 [2024-11-15 11:39:09.431106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:29.197 [2024-11-15 11:39:09.431159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 
11:39:09.431507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431826] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.197 [2024-11-15 11:39:09.431856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.197 [2024-11-15 11:39:09.431870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.431886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.431900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.431916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.431930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.431946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.431961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.431977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.431991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.432983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.432999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.433013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.433029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.433043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.433058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.433072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.433088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.433102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.198 [2024-11-15 11:39:09.433118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.198 [2024-11-15 11:39:09.433132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.433171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.433187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.433201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.433216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26101a0 is same with the state(6) to be set 00:20:29.199 [2024-11-15 11:39:09.434798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.434824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.434851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.434868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.434884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.434898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.434914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.434928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.434944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.434957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.434973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.434986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.199 [2024-11-15 11:39:09.435921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:29.199 [2024-11-15 11:39:09.435937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.435951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.435967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.435981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.435997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:29.200 [2024-11-15 11:39:09.436238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 
11:39:09.436571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.200 [2024-11-15 11:39:09.436821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.200 [2024-11-15 11:39:09.436837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x264ca90 is same with the state(6) to be set 00:20:29.200 [2024-11-15 11:39:09.438071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:29.200 [2024-11-15 11:39:09.438103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:29.200 [2024-11-15 11:39:09.438122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:29.200 [2024-11-15 11:39:09.438344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.200 [2024-11-15 
11:39:09.438374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2830090 with addr=10.0.0.2, port=4420 00:20:29.200 [2024-11-15 11:39:09.438392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2830090 is same with the state(6) to be set 00:20:29.200 [2024-11-15 11:39:09.438630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.200 [2024-11-15 11:39:09.438659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240c6f0 with addr=10.0.0.2, port=4420 00:20:29.200 [2024-11-15 11:39:09.438676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c6f0 is same with the state(6) to be set 00:20:29.200 [2024-11-15 11:39:09.438771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.200 [2024-11-15 11:39:09.438796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2403220 with addr=10.0.0.2, port=4420 00:20:29.200 [2024-11-15 11:39:09.438813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2403220 is same with the state(6) to be set 00:20:29.200 [2024-11-15 11:39:09.438898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.200 [2024-11-15 11:39:09.438923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2858060 with addr=10.0.0.2, port=4420 00:20:29.201 [2024-11-15 11:39:09.438939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2858060 is same with the state(6) to be set 00:20:29.201 [2024-11-15 11:39:09.438961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830090 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.439569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:29.201 [2024-11-15 11:39:09.439609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:29.201 [2024-11-15 11:39:09.439645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c6f0 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.439676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2403220 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.439695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2858060 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.439711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:29.201 [2024-11-15 11:39:09.439725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:29.201 [2024-11-15 11:39:09.439740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:29.201 [2024-11-15 11:39:09.439755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
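[Editor's note, not part of the captured log] The repeated "posix_sock_create: connect() failed, errno = 111" entries above correspond to ECONNREFUSED on Linux: the host side is trying to reconnect to 10.0.0.2:4420 while the target's TCP listener has already been torn down, so every reconnect attempt fails immediately and the controllers end up in the failed state reported by bdev_nvme. A minimal standalone sketch (plain POSIX sockets, not SPDK code; it assumes the address is reachable but nothing is listening on the port) reproduces the same errno:

```c
/*
 * Minimal sketch, assuming a Linux host and no listener on the target port.
 * Reproduces the "connect() failed, errno = 111" seen in the log above;
 * 111 is ECONNREFUSED. Address and port are taken from the log entries.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),          /* NVMe/TCP well-known port */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		/* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}

	close(fd);
	return 0;
}
```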
00:20:29.201 [2024-11-15 11:39:09.439895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.201 [2024-11-15 11:39:09.439921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240a0b0 with addr=10.0.0.2, port=4420 00:20:29.201 [2024-11-15 11:39:09.439943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a0b0 is same with the state(6) to be set 00:20:29.201 [2024-11-15 11:39:09.440034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.201 [2024-11-15 11:39:09.440059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2830e90 with addr=10.0.0.2, port=4420 00:20:29.201 [2024-11-15 11:39:09.440075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2830e90 is same with the state(6) to be set 00:20:29.201 [2024-11-15 11:39:09.440091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:29.201 [2024-11-15 11:39:09.440104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:29.201 [2024-11-15 11:39:09.440117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:29.201 [2024-11-15 11:39:09.440132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:29.201 [2024-11-15 11:39:09.440147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:29.201 [2024-11-15 11:39:09.440160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:29.201 [2024-11-15 11:39:09.440173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:29.201 [2024-11-15 11:39:09.440185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:29.201 [2024-11-15 11:39:09.440199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:29.201 [2024-11-15 11:39:09.440211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:29.201 [2024-11-15 11:39:09.440224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:29.201 [2024-11-15 11:39:09.440237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:29.201 [2024-11-15 11:39:09.440315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a0b0 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.440341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830e90 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.440399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:29.201 [2024-11-15 11:39:09.440418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:29.201 [2024-11-15 11:39:09.440432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:20:29.201 [2024-11-15 11:39:09.440445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:29.201 [2024-11-15 11:39:09.440460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:29.201 [2024-11-15 11:39:09.440473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:29.201 [2024-11-15 11:39:09.440486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:29.201 [2024-11-15 11:39:09.440498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:29.201 [2024-11-15 11:39:09.440597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28311c0 (9): Bad file descriptor 00:20:29.201 [2024-11-15 11:39:09.440752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.440977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.440994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.201 [2024-11-15 11:39:09.441429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.201 [2024-11-15 11:39:09.441445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
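[Editor's note, not part of the captured log] The "(00/08)" printed after "ABORTED - SQ DELETION" in the completion dumps is the NVMe status code type / status code pair: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion". That is the expected status for every READ/WRITE still in flight when the I/O submission queue is deleted during these controller resets. A small decode sketch, assuming a raw completion-queue-entry Dword 3 as input and using the field offsets from the NVMe base specification (not SPDK's own structures):

```c
/*
 * Illustrative decode of the NVMe completion status field.
 * CQE Dword 3: bit 16 is the phase tag, bits 31:17 are the status field;
 * within the status field, SC is bits 7:0, SCT bits 10:8, DNR bit 14.
 * The example value carries SCT=0x0 / SC=0x08, i.e. "ABORTED - SQ DELETION".
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t cqe_dw3 = 0x08u << 17;          /* hypothetical CQE DW3 with SCT=0x0, SC=0x08 */
	uint16_t status  = cqe_dw3 >> 17;        /* 15-bit status field, phase tag stripped */
	uint8_t  sc      = status & 0xff;        /* status code */
	uint8_t  sct     = (status >> 8) & 0x7;  /* status code type */
	uint8_t  dnr     = (status >> 14) & 0x1; /* do-not-retry, the "dnr:0" in the log */

	printf("sct=0x%02x sc=0x%02x dnr=%u -> %s\n", sct, sc, dnr,
	       (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
	return 0;
}
```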
00:20:29.202 [2024-11-15 11:39:09.441656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.441931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 
11:39:09.441961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.441978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.202 [2024-11-15 11:39:09.442714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.202 [2024-11-15 11:39:09.442730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.442746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.442762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.442777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.442793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.442811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.442827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x280c6f0 is same with the state(6) to be set 00:20:29.203 [2024-11-15 11:39:09.444077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.444974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.444990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.203 [2024-11-15 11:39:09.445216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.203 [2024-11-15 11:39:09.445234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:29.204 [2024-11-15 11:39:09.445456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 
11:39:09.445786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.445983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.445997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.446013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.446032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.446050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.446064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.446081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.446095] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.446110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x280ef50 is same with the state(6) to be set 00:20:29.204 [2024-11-15 11:39:09.447381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447683] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.204 [2024-11-15 11:39:09.447751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.204 [2024-11-15 11:39:09.447766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.447977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.447993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.205 [2024-11-15 11:39:09.448838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.205 [2024-11-15 11:39:09.448853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.448868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.448886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.448903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.448917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.448943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:29.206 [2024-11-15 11:39:09.448957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.448973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.448988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 
11:39:09.449262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.449407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.449422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2810500 is same with the state(6) to be set 00:20:29.206 [2024-11-15 11:39:09.450661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:29.206 [2024-11-15 11:39:09.450691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:29.206 [2024-11-15 11:39:09.450712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:29.206 [2024-11-15 11:39:09.451097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.206 [2024-11-15 11:39:09.451128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2837f70 with addr=10.0.0.2, port=4420 00:20:29.206 [2024-11-15 11:39:09.451146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2837f70 is same with the state(6) to be set 00:20:29.206 [2024-11-15 11:39:09.451242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.206 [2024-11-15 11:39:09.451268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2374110 with addr=10.0.0.2, port=4420 00:20:29.206 [2024-11-15 11:39:09.451284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2374110 is same with the state(6) to be set 00:20:29.206 [2024-11-15 11:39:09.451377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.206 [2024-11-15 11:39:09.451404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28371f0 with addr=10.0.0.2, port=4420 00:20:29.206 [2024-11-15 11:39:09.451420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28371f0 is same with the state(6) to be set 00:20:29.206 [2024-11-15 
11:39:09.452317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.206 [2024-11-15 11:39:09.452849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.206 [2024-11-15 11:39:09.452863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.452879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.452894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.452910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.452924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.452940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.452954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.452970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.452985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.453978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.453992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.454013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.454028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.454043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.454057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.454073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.454088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.207 [2024-11-15 11:39:09.454104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.207 [2024-11-15 11:39:09.454119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:29.208 [2024-11-15 11:39:09.454360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.208 [2024-11-15 11:39:09.454375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2812fd0 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.456042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:29.208 [2024-11-15 11:39:09.456075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:29.208 [2024-11-15 11:39:09.456093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:29.208 [2024-11-15 11:39:09.456116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:29.208 [2024-11-15 11:39:09.456133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:29.208 [2024-11-15 11:39:09.456149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:29.208 task offset: 27392 on job bdev=Nvme1n1 fails 00:20:29.208 00:20:29.208 Latency(us) 00:20:29.208 [2024-11-15T10:39:09.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.208 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme1n1 ended in about 0.90 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme1n1 : 0.90 213.24 13.33 71.08 0.00 222542.93 4611.79 228356.55 00:20:29.208 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme2n1 ended in about 0.92 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme2n1 : 0.92 138.99 8.69 69.50 0.00 297573.14 18932.62 273406.48 00:20:29.208 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme3n1 ended in about 0.91 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme3n1 : 0.91 212.13 13.26 70.71 0.00 214549.81 6699.24 251658.24 00:20:29.208 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 
64, IO size: 65536) 00:20:29.208 Job: Nvme4n1 ended in about 0.93 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme4n1 : 0.93 211.72 13.23 68.78 0.00 212212.21 18252.99 239230.67 00:20:29.208 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme5n1 ended in about 0.92 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme5n1 : 0.92 145.06 9.07 69.80 0.00 270967.64 18058.81 236123.78 00:20:29.208 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme6n1 ended in about 0.93 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme6n1 : 0.93 137.08 8.57 68.54 0.00 277670.87 24466.77 253211.69 00:20:29.208 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme7n1 ended in about 0.94 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme7n1 : 0.94 136.60 8.54 68.30 0.00 272771.10 19126.80 251658.24 00:20:29.208 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme8n1 ended in about 0.91 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme8n1 : 0.91 210.53 13.16 70.18 0.00 193684.95 5072.97 250104.79 00:20:29.208 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme9n1 ended in about 0.94 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme9n1 : 0.94 135.88 8.49 67.94 0.00 262278.26 20874.43 267192.70 00:20:29.208 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:29.208 Job: Nvme10n1 ended in about 0.92 seconds with error 00:20:29.208 Verification LBA range: start 0x0 length 0x400 00:20:29.208 Nvme10n1 : 0.92 138.45 8.65 69.23 0.00 250452.45 23204.60 292047.83 00:20:29.208 [2024-11-15T10:39:09.635Z] =================================================================================================================== 00:20:29.208 [2024-11-15T10:39:09.635Z] Total : 1679.67 104.98 694.05 0.00 243142.86 4611.79 292047.83 00:20:29.208 [2024-11-15 11:39:09.482910] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:29.208 [2024-11-15 11:39:09.483005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:29.208 [2024-11-15 11:39:09.483134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2837f70 (9): Bad file descriptor 00:20:29.208 [2024-11-15 11:39:09.483167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2374110 (9): Bad file descriptor 00:20:29.208 [2024-11-15 11:39:09.483187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28371f0 (9): Bad file descriptor 00:20:29.208 [2024-11-15 11:39:09.483602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 [2024-11-15 11:39:09.483640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2830090 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.483668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2830090 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.483758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 
[2024-11-15 11:39:09.483784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2858060 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.483801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2858060 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.483889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 [2024-11-15 11:39:09.483918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2403220 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.483934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2403220 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.484012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 [2024-11-15 11:39:09.484038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240c6f0 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.484054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c6f0 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.484128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 [2024-11-15 11:39:09.484155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2830e90 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.484171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2830e90 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.484270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 [2024-11-15 11:39:09.484312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240a0b0 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.484330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a0b0 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.484404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.208 [2024-11-15 11:39:09.484430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28311c0 with addr=10.0.0.2, port=4420 00:20:29.208 [2024-11-15 11:39:09.484446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28311c0 is same with the state(6) to be set 00:20:29.208 [2024-11-15 11:39:09.484461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:29.208 [2024-11-15 11:39:09.484475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:29.208 [2024-11-15 11:39:09.484491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:29.208 [2024-11-15 11:39:09.484508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:29.208 [2024-11-15 11:39:09.484525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:29.208 [2024-11-15 11:39:09.484538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:29.208 [2024-11-15 11:39:09.484557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:29.208 [2024-11-15 11:39:09.484571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:29.208 [2024-11-15 11:39:09.484585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:29.208 [2024-11-15 11:39:09.484604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:29.208 [2024-11-15 11:39:09.484617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:29.208 [2024-11-15 11:39:09.484630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.485054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830090 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2858060 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2403220 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c6f0 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2830e90 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a0b0 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28311c0 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.485233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:29.209 [2024-11-15 11:39:09.485258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:29.209 [2024-11-15 11:39:09.485276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:29.209 [2024-11-15 11:39:09.485331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:20:29.209 [2024-11-15 11:39:09.485393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.485444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.485496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.485555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.485606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
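(Editor's note, not part of the captured log.) The long runs of paired READ / ABORTED - SQ DELETION (00/08) notices above are each in-flight command being completed with an abort status as its submission queue is deleted while the target is shut down: the command IDs run 0 through 63 (one full queue, matching the depth: 64 reported in the latency summary) and the LBA advances by 128 blocks per command, which is consistent with the 65536-byte I/O size in that summary if the namespace uses 512-byte blocks (an assumption, not stated in the log). A small illustrative sketch of that pattern, not taken from the test scripts:

```bash
# Illustrative sketch only (not part of the test scripts): regenerate the
# cid/lba pattern of the aborted reads above -- one full queue of 64
# outstanding commands, each 128 blocks long, starting at lba 16384.
for cid in {0..63}; do
    printf 'READ sqid:1 cid:%d lba:%d len:128\n' "$cid" $((16384 + cid * 128))
done
# cid:0 -> lba:16384 ... cid:63 -> lba:24448, matching the notices above.
```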
00:20:29.209 [2024-11-15 11:39:09.485658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.485671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.485683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.485695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.485808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.209 [2024-11-15 11:39:09.485835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28371f0 with addr=10.0.0.2, port=4420 00:20:29.209 [2024-11-15 11:39:09.485852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28371f0 is same with the state(6) to be set 00:20:29.209 [2024-11-15 11:39:09.485934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.209 [2024-11-15 11:39:09.485959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2374110 with addr=10.0.0.2, port=4420 00:20:29.209 [2024-11-15 11:39:09.485975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2374110 is same with the state(6) to be set 00:20:29.209 [2024-11-15 11:39:09.486050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.209 [2024-11-15 11:39:09.486075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2837f70 with addr=10.0.0.2, port=4420 00:20:29.209 [2024-11-15 11:39:09.486091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2837f70 is same with the state(6) to be set 00:20:29.209 [2024-11-15 11:39:09.486135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28371f0 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.486161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2374110 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.486179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2837f70 (9): Bad file descriptor 00:20:29.209 [2024-11-15 11:39:09.486219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.486238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.486266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.486281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.486295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.486318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.486333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:20:29.209 [2024-11-15 11:39:09.486346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:29.209 [2024-11-15 11:39:09.486360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:29.209 [2024-11-15 11:39:09.486372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:29.209 [2024-11-15 11:39:09.486385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:29.209 [2024-11-15 11:39:09.486397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:29.777 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:30.716 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2974402 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2974402 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2974402 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # 
nvmftestfini 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:30.717 rmmod nvme_tcp 00:20:30.717 rmmod nvme_fabrics 00:20:30.717 rmmod nvme_keyring 00:20:30.717 11:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2974225 ']' 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2974225 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2974225 ']' 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2974225 00:20:30.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2974225) - No such process 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2974225 is not found' 00:20:30.717 Process with pid 2974225 is not found 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.717 11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.717 
11:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.623 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:32.623 00:20:32.623 real 0m7.492s 00:20:32.623 user 0m18.573s 00:20:32.623 sys 0m1.467s 00:20:32.623 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.623 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:32.623 ************************************ 00:20:32.623 END TEST nvmf_shutdown_tc3 00:20:32.623 ************************************ 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:32.882 ************************************ 00:20:32.882 START TEST nvmf_shutdown_tc4 00:20:32.882 ************************************ 00:20:32.882 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:32.883 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:32.883 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:32.883 Found net devices under 0000:09:00.0: cvl_0_0 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:32.883 Found net devices under 0000:09:00.1: cvl_0_1 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:32.883 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:32.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:20:32.884 00:20:32.884 --- 10.0.0.2 ping statistics --- 00:20:32.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.884 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:32.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:20:32.884 00:20:32.884 --- 10.0.0.1 ping statistics --- 00:20:32.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.884 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2975779 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2975779 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2975779 ']' 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.884 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.143 [2024-11-15 11:39:13.349248] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:33.143 [2024-11-15 11:39:13.349366] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.143 [2024-11-15 11:39:13.420687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.143 [2024-11-15 11:39:13.476260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.143 [2024-11-15 11:39:13.476315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.143 [2024-11-15 11:39:13.476330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.143 [2024-11-15 11:39:13.476341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.143 [2024-11-15 11:39:13.476350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.143 [2024-11-15 11:39:13.477776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.143 [2024-11-15 11:39:13.477840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.143 [2024-11-15 11:39:13.477908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.143 [2024-11-15 11:39:13.477905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.401 [2024-11-15 11:39:13.627831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:33.401 11:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.401 11:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.401 Malloc1 
00:20:33.401 [2024-11-15 11:39:13.733740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.401 Malloc2 00:20:33.401 Malloc3 00:20:33.660 Malloc4 00:20:33.660 Malloc5 00:20:33.660 Malloc6 00:20:33.660 Malloc7 00:20:33.660 Malloc8 00:20:33.918 Malloc9 00:20:33.918 Malloc10 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2975884 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:33.918 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:33.918 [2024-11-15 11:39:14.265225] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2975779 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2975779 ']' 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2975779 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2975779 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.190 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2975779' 00:20:39.190 killing process with pid 2975779 00:20:39.191 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2975779 00:20:39.191 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2975779 00:20:39.191 Write completed with error (sct=0, 
sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.265643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e60 is same with Write completed with error (sct=0, sc=8) 00:20:39.191 the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 [2024-11-15 11:39:19.265711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e60 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.265736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e60 is same with the state(6) to be set 00:20:39.191 [2024-11-15 11:39:19.265748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e60 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.265760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e60 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.265772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1240e60 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 [2024-11-15 11:39:19.265926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair 
id 3 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.266601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with Write completed with error (sct=0, sc=8) 00:20:39.191 the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.266635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with the state(6) to be set 00:20:39.191 starting I/O failed: -6 00:20:39.191 [2024-11-15 11:39:19.266658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.266671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with starting I/O failed: -6 00:20:39.191 the state(6) to be set 00:20:39.191 [2024-11-15 11:39:19.266685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with Write completed with error (sct=0, sc=8) 00:20:39.191 the state(6) to be set 00:20:39.191 [2024-11-15 11:39:19.266699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.266711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with the state(6) to be set 00:20:39.191 [2024-11-15 11:39:19.266723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 
00:20:39.191 [2024-11-15 11:39:19.266735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1241800 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 [2024-11-15 11:39:19.266940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 starting I/O failed: -6 00:20:39.191 [2024-11-15 11:39:19.267659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.267687] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with the state(6) to be set 00:20:39.191 Write completed with error (sct=0, sc=8) 00:20:39.191 [2024-11-15 11:39:19.267701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with the state(6) to be set 00:20:39.191 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.267720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.267732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with starting I/O failed: -6 00:20:39.192 the state(6) to be set 00:20:39.192 [2024-11-15 11:39:19.267745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with Write completed with error (sct=0, sc=8) 00:20:39.192 the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.267758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with Write completed with error (sct=0, sc=8) 00:20:39.192 the state(6) to be set 00:20:39.192 [2024-11-15 11:39:19.267786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.267799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e7c0 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.268123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.268334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ec90 is same with the state(6) to be set 00:20:39.192 [2024-11-15 11:39:19.268369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ec90 is same with the state(6) 
to be set 00:20:39.192 [2024-11-15 11:39:19.268385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ec90 is same with the state(6) to be set 00:20:39.192 [2024-11-15 11:39:19.268398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ec90 is same with Write completed with error (sct=0, sc=8) 00:20:39.192 the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.268412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ec90 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.268425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119ec90 is same with the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.268815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with Write completed with error (sct=0, sc=8) 00:20:39.192 the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.268854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.268868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.268881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.268893] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.268910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with starting I/O failed: -6 00:20:39.192 the state(6) to be set 00:20:39.192 [2024-11-15 11:39:19.268924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with Write completed with error (sct=0, sc=8) 00:20:39.192 the state(6) to be set 00:20:39.192 starting I/O failed: -6 00:20:39.192 [2024-11-15 11:39:19.268937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f160 is same with the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 [2024-11-15 11:39:19.269336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with starting I/O failed: -6 00:20:39.192 the state(6) to be set 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.192 Write completed with error (sct=0, sc=8) 00:20:39.192 starting I/O failed: -6 00:20:39.193 [2024-11-15 11:39:19.269391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with the state(6) to be set 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 [2024-11-15 11:39:19.269410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with the state(6) to be set 00:20:39.193 starting I/O failed: -6 00:20:39.193 [2024-11-15 11:39:19.269423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with the state(6) to be set 
00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 [2024-11-15 11:39:19.269435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with the state(6) to be set 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 [2024-11-15 11:39:19.269447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with the state(6) to be set 00:20:39.193 starting I/O failed: -6 00:20:39.193 [2024-11-15 11:39:19.269460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with the state(6) to be set 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 [2024-11-15 11:39:19.269472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e2f0 is same with starting I/O failed: -6 00:20:39.193 the state(6) to be set 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 [2024-11-15 11:39:19.269988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:39.193 NVMe io qpair process completion error 00:20:39.193 [2024-11-15 11:39:19.272924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5d0 is same with the state(6) to be set 00:20:39.193 [2024-11-15 11:39:19.272959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5d0 is same with the state(6) to be set 00:20:39.193 [2024-11-15 11:39:19.272975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5d0 is same with the state(6) to be set 00:20:39.193 [2024-11-15 11:39:19.272988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5d0 is same with the state(6) to be set 00:20:39.193 [2024-11-15 11:39:19.273000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5d0 is same with the state(6) to be set 00:20:39.193 [2024-11-15 11:39:19.273013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d5d0 is same with the state(6) to be set 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 
00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 [2024-11-15 11:39:19.274916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error 
(sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 [2024-11-15 11:39:19.275981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.193 starting I/O failed: -6 00:20:39.193 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 
Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 [2024-11-15 11:39:19.277111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with 
error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error 
(sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 [2024-11-15 11:39:19.278803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:39.194 NVMe io qpair process completion error 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 starting I/O failed: -6 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.194 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 [2024-11-15 11:39:19.279730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12559f0 is same with Write completed with error (sct=0, sc=8) 00:20:39.195 the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.279769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12559f0 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.279792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12559f0 is same with Write completed with error (sct=0, sc=8) 00:20:39.195 the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.279805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12559f0 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.279879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write 
completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.279909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.279924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.279936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.279948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.279960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with Write completed with error (sct=0, sc=8) 00:20:39.195 the state(6) to be set 00:20:39.195 starting I/O failed: -6 00:20:39.195 [2024-11-15 11:39:19.279974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.279988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with Write completed with error (sct=0, sc=8) 00:20:39.195 the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.280001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254b80 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.280102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:39.195 [2024-11-15 11:39:19.280395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.280429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 starting I/O failed: -6 00:20:39.195 [2024-11-15 11:39:19.280450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.280463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.280475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.280492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.280509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 starting I/O failed: -6 00:20:39.195 [2024-11-15 11:39:19.280521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a0e40 is same with the state(6) to be set 00:20:39.195 starting I/O failed: -6 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error 
(sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.281427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 [2024-11-15 11:39:19.281612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a17e0 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 [2024-11-15 11:39:19.281639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a17e0 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 11:39:19.281654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a17e0 is same with the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.281666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a17e0 is same with the state(6) to be set 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 [2024-11-15 
11:39:19.281678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a17e0 is same with starting I/O failed: -6 00:20:39.195 the state(6) to be set 00:20:39.195 [2024-11-15 11:39:19.281691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a17e0 is same with Write completed with error (sct=0, sc=8) 00:20:39.195 the state(6) to be set 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.195 starting I/O failed: -6 00:20:39.195 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with 
error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 [2024-11-15 11:39:19.282632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 
00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.196 Write completed with error (sct=0, sc=8) 00:20:39.196 starting I/O failed: -6 00:20:39.197 [2024-11-15 11:39:19.284410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:39.197 NVMe io qpair process completion error 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 
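
The two host-side prints that dominate this stretch of the log are the test application reporting I/O outcomes: "Write completed with error (sct=0, sc=8)" gives the NVMe status code type and status code from a completed write (a generic-status abort, consistent with the submission queues being deleted underneath it), while "starting I/O failed: -6" is a negative errno (-ENXIO) returned when a new write could not even be submitted. A rough sketch of the host-side pattern that produces this sort of output, using SPDK's public spdk_nvme_ns_cmd_write() and completion structure (function names and buffer/connection handling are illustrative, not the actual test source):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Completion callback: report the status code type (sct) and status code (sc)
     * from the NVMe completion entry when the write did not succeed. */
    static void write_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
        }
    }

    /* Submission path: a non-zero return (e.g. -ENXIO == -6) means the write was
     * never queued, which appears in the log as "starting I/O failed: -6". */
    static void start_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                            void *buf, uint64_t lba, uint32_t lba_count)
    {
        int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                        write_done, NULL, 0);
        if (rc != 0) {
            printf("starting I/O failed: %d\n", rc);
        }
    }
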
00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 [2024-11-15 11:39:19.285667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error 
(sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 [2024-11-15 11:39:19.286727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 
Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.197 Write completed with error (sct=0, sc=8) 00:20:39.197 starting I/O failed: -6 00:20:39.198 [2024-11-15 11:39:19.287931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting 
I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O 
failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 [2024-11-15 11:39:19.289997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:39.198 NVMe io qpair process completion error 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O 
failed: -6 00:20:39.198 [2024-11-15 11:39:19.291370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 Write completed with error (sct=0, sc=8) 00:20:39.198 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 Write completed with error (sct=0, sc=8) 00:20:39.199 starting I/O failed: -6 00:20:39.199 [2024-11-15 11:39:19.292431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 
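
Each "[nqn...] CQ transport error -6 (No such device or address) on qpair id N" line marks the point where the host's completion-polling path gives up on a queue pair because the TCP connection behind it is gone; an application sees the same condition as a negative return from spdk_nvme_qpair_process_completions(). A hedged sketch of how a poller might react (the helper name and the cleanup policy are assumptions, not part of this test):

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair. A negative return (e.g. -ENXIO) corresponds to a
     * transport-level failure like the "CQ transport error -6" above and means
     * the qpair can no longer be used; it has to be reconnected or freed. */
    static bool poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
            fprintf(stderr, "qpair poll failed: %d, stopping I/O on this qpair\n", rc);
            return false;   /* caller should tear this qpair down */
        }
        return true;        /* rc >= 0: number of completions processed */
    }
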
00:20:39.199 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.199 [2024-11-15 11:39:19.293638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:39.199 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.200 [2024-11-15 11:39:19.295729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:39.200 NVMe io qpair process completion error
00:20:39.200 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.200 [2024-11-15 11:39:19.297017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:39.200 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.200 [2024-11-15 11:39:19.298047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:39.200 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.201 [2024-11-15 11:39:19.299252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:39.201 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.201 [2024-11-15 11:39:19.302258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:39.201 NVMe io qpair process completion error
00:20:39.202 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.202 [2024-11-15 11:39:19.304215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:39.202 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.202 [2024-11-15 11:39:19.305482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:39.202 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.203 [2024-11-15 11:39:19.308593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:39.203 NVMe io qpair process completion error
00:20:39.203 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.203 [2024-11-15 11:39:19.309915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:39.203 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.203 [2024-11-15 11:39:19.310960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:39.203 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.204 [2024-11-15 11:39:19.312155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:39.204 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.204 [2024-11-15 11:39:19.313849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:39.204 NVMe io qpair process completion error
00:20:39.204 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.205 [2024-11-15 11:39:19.315337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:39.205 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.205 [2024-11-15 11:39:19.316409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:39.205 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.205 [2024-11-15 11:39:19.317566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:39.205 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated)
00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.205 Write completed with error (sct=0, sc=8) 00:20:39.205 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 
00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 [2024-11-15 11:39:19.319957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:39.206 NVMe io qpair process completion error 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 
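The block of completions repeating above is the expected signature of this test: the target side is being torn down while spdk_nvme_perf still has writes in flight, so each outstanding command completes with the generic error status shown (sct=0, sc=8) and the queue pair then reports "CQ transport error -6 (No such device or address)", i.e. -ENXIO from the TCP transport. A rough way to provoke the same pattern by hand, assuming a running SPDK target that serves one of the subsystems named in this log; the flag values and the sleep are illustrative, not the exact commands of shutdown.sh:

    # Start a write workload against one subsystem, then pull the subsystem
    # out from under it; the in-flight writes complete with errors and the
    # qpair reports the -ENXIO transport error seen in this log.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randwrite -t 30 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3' &
    sleep 5
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
    wait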
00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 [2024-11-15 11:39:19.321387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:39.206 starting I/O failed: -6 00:20:39.206 starting I/O failed: -6 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 
Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 [2024-11-15 11:39:19.322498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.206 starting I/O failed: -6 00:20:39.206 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting 
I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 [2024-11-15 11:39:19.323615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O 
failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 Write completed with error (sct=0, sc=8) 00:20:39.207 starting I/O failed: -6 00:20:39.207 [2024-11-15 11:39:19.327429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:39.207 NVMe io qpair process completion error 00:20:39.207 Initializing NVMe Controllers 00:20:39.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:20:39.207 Controller IO queue size 128, less than 
required.
00:20:39.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:39.207 Controller IO queue size 128, less than required.
00:20:39.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:39.207 Controller IO queue size 128, less than required.
00:20:39.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:39.207 Controller IO queue size 128, less than required.
00:20:39.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.207 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:39.207 Controller IO queue size 128, less than required.
00:20:39.207 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:39.208 Controller IO queue size 128, less than required.
00:20:39.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:39.208 Controller IO queue size 128, less than required.
00:20:39.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:39.208 Controller IO queue size 128, less than required.
00:20:39.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:39.208 Controller IO queue size 128, less than required.
00:20:39.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:39.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:39.208 Controller IO queue size 128, less than required.
00:20:39.208 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
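The advisory repeated above is the perf tool pointing out that each attached subsystem only advertises an IO queue size of 128, so any deeper queue simply waits inside the NVMe driver. A minimal sketch of rerunning the same binary with the queue depth capped at that limit; only the binary path, target address and subsystem NQNs are taken from this log, and the -q/-o/-w/-t values are illustrative choices rather than what the test itself used:

    # Keep the queue depth at or below the controller's advertised limit (128)
    # and use a smaller IO size so requests are not queued up in the driver.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 4096 -w randwrite -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'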
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:39.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:39.208 Initialization complete. Launching workers.
00:20:39.208 ========================================================
00:20:39.208                                                                                  Latency(us)
00:20:39.208 Device Information                                                         :     IOPS    MiB/s   Average      min      max
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1807.12    77.65  70852.66  1122.59  120994.02
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1844.45    79.25  69446.65   755.74  120412.70
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1826.64    78.49  70147.98  1118.99  121380.64
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1811.37    77.83  70771.10   908.91  124262.67
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1809.67    77.76  70866.95  1066.26  118401.96
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1805.21    77.57  71079.82  1220.29  130976.63
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1756.22    75.46  73109.76  1079.19  134096.45
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1786.97    76.78  71878.93   918.22  136631.54
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1783.79    76.65  71202.10  1052.68  118370.26
00:20:39.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1780.19    76.49  72137.14   856.27  136841.68
00:20:39.208 ========================================================
00:20:39.208 Total                                                                      : 18011.63   773.94  71136.67   755.74  136841.68
00:20:39.208 
00:20:39.208 [2024-11-15 11:39:19.333651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edf900 is same with the state(6) to be set
00:20:39.208 [2024-11-15 11:39:19.333753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede920 is same with the state(6) to be set
00:20:39.208 [2024-11-15 11:39:19.333812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd9e0 is same with the state(6) to be set
00:20:39.208 [2024-11-15 11:39:19.333873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede2c0 is same with the state(6) to be set
00:20:39.208 [2024-11-15 11:39:19.333943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede5f0 is same with the state(6) to be set
00:20:39.208 [2024-11-15 11:39:19.334002] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edfae0 is same with the state(6) to be set 00:20:39.208 [2024-11-15 11:39:19.334062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eddd10 is same with the state(6) to be set 00:20:39.208 [2024-11-15 11:39:19.334126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edd6b0 is same with the state(6) to be set 00:20:39.208 [2024-11-15 11:39:19.334184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edf720 is same with the state(6) to be set 00:20:39.208 [2024-11-15 11:39:19.334242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edec50 is same with the state(6) to be set 00:20:39.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:39.467 11:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2975884 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2975884 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2975884 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.406 
11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.406 rmmod nvme_tcp 00:20:40.406 rmmod nvme_fabrics 00:20:40.406 rmmod nvme_keyring 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2975779 ']' 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2975779 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2975779 ']' 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2975779 00:20:40.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2975779) - No such process 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2975779 is not found' 00:20:40.406 Process with pid 2975779 is not found 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.406 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.943 11:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:42.943 
00:20:42.943 real 0m9.762s
00:20:42.943 user 0m23.849s
00:20:42.943 sys 0m5.696s
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:42.943 ************************************
00:20:42.943 END TEST nvmf_shutdown_tc4
00:20:42.943 ************************************
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:20:42.943 
00:20:42.943 real 0m37.429s
00:20:42.943 user 1m41.987s
00:20:42.943 sys 0m12.122s
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:42.943 ************************************
00:20:42.943 END TEST nvmf_shutdown
00:20:42.943 ************************************
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:42.943 ************************************
00:20:42.943 START TEST nvmf_nsid
00:20:42.943 ************************************
00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:20:42.943 * Looking for test storage...
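Before the nsid run continues, the nvmftestfini/nvmfcleanup trace above amounts to one short cleanup sequence: unload the nvme-tcp and nvme-fabrics modules, strip the SPDK_NVMF iptables rules, tear down the test namespace and flush the test interface. A condensed sketch of those steps, built only from commands that appear in the trace; error handling and the xtrace plumbing are omitted, and remove_spdk_ns stands in for the traced helper:

    # Condensed view of the teardown traced above for nvmf_shutdown_tc4.
    set +e
    modprobe -v -r nvme-tcp        # the traced run also unloaded nvme_fabrics and nvme_keyring here
    modprobe -v -r nvme-fabrics
    set -e
    # Drop only the SPDK_NVMF rules the test added; keep everything else.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Remove the test network namespace and flush the test interface address.
    remove_spdk_ns
    ip -4 addr flush cvl_0_1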
00:20:42.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:42.943 11:39:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:42.943 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:42.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.944 --rc genhtml_branch_coverage=1 00:20:42.944 --rc genhtml_function_coverage=1 00:20:42.944 --rc genhtml_legend=1 00:20:42.944 --rc geninfo_all_blocks=1 00:20:42.944 --rc geninfo_unexecuted_blocks=1 00:20:42.944 00:20:42.944 ' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:42.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.944 --rc genhtml_branch_coverage=1 00:20:42.944 --rc genhtml_function_coverage=1 00:20:42.944 --rc genhtml_legend=1 00:20:42.944 --rc geninfo_all_blocks=1 00:20:42.944 --rc geninfo_unexecuted_blocks=1 00:20:42.944 00:20:42.944 ' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:42.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.944 --rc genhtml_branch_coverage=1 00:20:42.944 --rc genhtml_function_coverage=1 00:20:42.944 --rc genhtml_legend=1 00:20:42.944 --rc geninfo_all_blocks=1 00:20:42.944 --rc geninfo_unexecuted_blocks=1 00:20:42.944 00:20:42.944 ' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:42.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.944 --rc genhtml_branch_coverage=1 00:20:42.944 --rc genhtml_function_coverage=1 00:20:42.944 --rc genhtml_legend=1 00:20:42.944 --rc geninfo_all_blocks=1 00:20:42.944 --rc geninfo_unexecuted_blocks=1 00:20:42.944 00:20:42.944 ' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:42.944 11:39:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:44.904 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:44.904 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:44.904 Found net devices under 0000:09:00.0: cvl_0_0 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:44.904 Found net devices under 0000:09:00.1: cvl_0_1 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.904 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.904 11:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:20:44.905 00:20:44.905 --- 10.0.0.2 ping statistics --- 00:20:44.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.905 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:20:44.905 00:20:44.905 --- 10.0.0.1 ping statistics --- 00:20:44.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.905 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2978624 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2978624 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2978624 ']' 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.905 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.905 [2024-11-15 11:39:25.301545] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:20:44.905 [2024-11-15 11:39:25.301640] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.163 [2024-11-15 11:39:25.374946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.163 [2024-11-15 11:39:25.433194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.163 [2024-11-15 11:39:25.433246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.163 [2024-11-15 11:39:25.433260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.163 [2024-11-15 11:39:25.433270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.163 [2024-11-15 11:39:25.433280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.163 [2024-11-15 11:39:25.433917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2978654 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=2a38a477-9ead-4132-908c-1a50708f71e0 00:20:45.163 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:45.421 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=63847454-eec4-4c83-bac8-c9757eb70f02 00:20:45.421 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:45.421 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c741e856-3383-456e-b4d4-b371b9a1ce52 00:20:45.421 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:45.421 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.421 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:45.421 null0 00:20:45.421 null1 00:20:45.421 null2 00:20:45.421 [2024-11-15 11:39:25.623740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.421 [2024-11-15 11:39:25.637112] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:20:45.422 [2024-11-15 11:39:25.637192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978654 ] 00:20:45.422 [2024-11-15 11:39:25.647926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2978654 /var/tmp/tgt2.sock 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2978654 ']' 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:45.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.422 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:45.422 [2024-11-15 11:39:25.704024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.422 [2024-11-15 11:39:25.761850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.680 11:39:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.680 11:39:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:45.680 11:39:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:46.245 [2024-11-15 11:39:26.412863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.245 [2024-11-15 11:39:26.429054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:46.245 nvme0n1 nvme0n2 00:20:46.245 nvme1n1 00:20:46.245 11:39:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:46.245 11:39:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:46.245 11:39:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:46.811 11:39:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:47.744 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:47.744 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:47.744 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:47.744 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:47.744 11:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:47.744 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 2a38a477-9ead-4132-908c-1a50708f71e0 00:20:47.744 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2a38a4779ead4132908c1a50708f71e0 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2A38A4779EAD4132908C1A50708F71E0 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 2A38A4779EAD4132908C1A50708F71E0 == \2\A\3\8\A\4\7\7\9\E\A\D\4\1\3\2\9\0\8\C\1\A\5\0\7\0\8\F\7\1\E\0 ]] 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 63847454-eec4-4c83-bac8-c9757eb70f02 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=63847454eec44c83bac8c9757eb70f02 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 63847454EEC44C83BAC8C9757EB70F02 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 63847454EEC44C83BAC8C9757EB70F02 == \6\3\8\4\7\4\5\4\E\E\C\4\4\C\8\3\B\A\C\8\C\9\7\5\7\E\B\7\0\F\0\2 ]] 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:47.745 11:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:47.745 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c741e856-3383-456e-b4d4-b371b9a1ce52 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c741e8563383456eb4d4b371b9a1ce52 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C741E8563383456EB4D4B371B9A1CE52 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C741E8563383456EB4D4B371B9A1CE52 == \C\7\4\1\E\8\5\6\3\3\8\3\4\5\6\E\B\4\D\4\B\3\7\1\B\9\A\1\C\E\5\2 ]] 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2978654 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2978654 ']' 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2978654 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.003 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2978654 00:20:48.260 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.260 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.260 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2978654' 00:20:48.260 killing process with pid 2978654 00:20:48.260 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2978654 00:20:48.260 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2978654 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.518 rmmod nvme_tcp 00:20:48.518 rmmod nvme_fabrics 00:20:48.518 rmmod nvme_keyring 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2978624 ']' 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2978624 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2978624 ']' 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2978624 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.518 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2978624 00:20:48.776 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.776 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.776 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2978624' 00:20:48.776 killing process with pid 2978624 00:20:48.776 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2978624 00:20:48.776 11:39:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2978624 00:20:48.776 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.776 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.776 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.776 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.036 11:39:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.943 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.943 00:20:50.943 real 0m8.311s 00:20:50.943 user 0m8.308s 
00:20:50.943 sys 0m2.607s 00:20:50.943 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.943 11:39:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:50.943 ************************************ 00:20:50.943 END TEST nvmf_nsid 00:20:50.943 ************************************ 00:20:50.943 11:39:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:50.943 00:20:50.943 real 11m40.475s 00:20:50.943 user 27m45.825s 00:20:50.943 sys 2m45.790s 00:20:50.943 11:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.943 11:39:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.943 ************************************ 00:20:50.943 END TEST nvmf_target_extra 00:20:50.943 ************************************ 00:20:50.943 11:39:31 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:50.943 11:39:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.943 11:39:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.943 11:39:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.943 ************************************ 00:20:50.943 START TEST nvmf_host 00:20:50.943 ************************************ 00:20:50.943 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:50.943 * Looking for test storage... 00:20:51.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:51.201 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.201 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.201 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.202 --rc genhtml_branch_coverage=1 00:20:51.202 --rc genhtml_function_coverage=1 00:20:51.202 --rc genhtml_legend=1 00:20:51.202 --rc geninfo_all_blocks=1 00:20:51.202 --rc geninfo_unexecuted_blocks=1 00:20:51.202 00:20:51.202 ' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.202 --rc genhtml_branch_coverage=1 00:20:51.202 --rc genhtml_function_coverage=1 00:20:51.202 --rc genhtml_legend=1 00:20:51.202 --rc geninfo_all_blocks=1 00:20:51.202 --rc geninfo_unexecuted_blocks=1 00:20:51.202 00:20:51.202 ' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.202 --rc genhtml_branch_coverage=1 00:20:51.202 --rc genhtml_function_coverage=1 00:20:51.202 --rc genhtml_legend=1 00:20:51.202 --rc geninfo_all_blocks=1 00:20:51.202 --rc geninfo_unexecuted_blocks=1 00:20:51.202 00:20:51.202 ' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.202 --rc genhtml_branch_coverage=1 00:20:51.202 --rc genhtml_function_coverage=1 00:20:51.202 --rc genhtml_legend=1 00:20:51.202 --rc geninfo_all_blocks=1 00:20:51.202 --rc geninfo_unexecuted_blocks=1 00:20:51.202 00:20:51.202 ' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.202 11:39:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.202 ************************************ 00:20:51.202 START TEST nvmf_multicontroller 00:20:51.202 ************************************ 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:51.203 * Looking for test storage... 
00:20:51.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.203 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.462 --rc genhtml_branch_coverage=1 00:20:51.462 --rc genhtml_function_coverage=1 00:20:51.462 --rc genhtml_legend=1 00:20:51.462 --rc geninfo_all_blocks=1 00:20:51.462 --rc geninfo_unexecuted_blocks=1 00:20:51.462 00:20:51.462 ' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.462 --rc genhtml_branch_coverage=1 00:20:51.462 --rc genhtml_function_coverage=1 00:20:51.462 --rc genhtml_legend=1 00:20:51.462 --rc geninfo_all_blocks=1 00:20:51.462 --rc geninfo_unexecuted_blocks=1 00:20:51.462 00:20:51.462 ' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.462 --rc genhtml_branch_coverage=1 00:20:51.462 --rc genhtml_function_coverage=1 00:20:51.462 --rc genhtml_legend=1 00:20:51.462 --rc geninfo_all_blocks=1 00:20:51.462 --rc geninfo_unexecuted_blocks=1 00:20:51.462 00:20:51.462 ' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.462 --rc genhtml_branch_coverage=1 00:20:51.462 --rc genhtml_function_coverage=1 00:20:51.462 --rc genhtml_legend=1 00:20:51.462 --rc geninfo_all_blocks=1 00:20:51.462 --rc geninfo_unexecuted_blocks=1 00:20:51.462 00:20:51.462 ' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:51.462 11:39:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.462 11:39:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.462 11:39:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.367 
11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.367 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:53.368 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:53.368 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.368 11:39:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:53.368 Found net devices under 0000:09:00.0: cvl_0_0 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:53.368 Found net devices under 0000:09:00.1: cvl_0_1 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
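A condensed sketch of what the nvmf_tcp_init call traced above amounts to on this machine, taken from the commands that appear in the trace below (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing and the SPDK_NVMF iptables comment are specific to this run):

    # Target-side port cvl_0_0 is isolated in a network namespace; cvl_0_1 stays in the
    # root namespace as the initiator side of a point-to-point TCP link.
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the actual rule
    ping -c 1 10.0.0.2                                   # initiator ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator ns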
00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.368 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:53.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:20:53.626 00:20:53.626 --- 10.0.0.2 ping statistics --- 00:20:53.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.626 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:20:53.626 00:20:53.626 --- 10.0.0.1 ping statistics --- 00:20:53.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.626 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.626 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2981209 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2981209 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2981209 ']' 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.627 11:39:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.627 [2024-11-15 11:39:34.001523] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:20:53.627 [2024-11-15 11:39:34.001604] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.885 [2024-11-15 11:39:34.075902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.885 [2024-11-15 11:39:34.136189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.885 [2024-11-15 11:39:34.136244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.885 [2024-11-15 11:39:34.136257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.885 [2024-11-15 11:39:34.136269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.885 [2024-11-15 11:39:34.136279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.885 [2024-11-15 11:39:34.137807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.885 [2024-11-15 11:39:34.137870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.885 [2024-11-15 11:39:34.137874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:53.885 [2024-11-15 11:39:34.291819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.885 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 Malloc0 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 [2024-11-15 11:39:34.352659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 [2024-11-15 11:39:34.360525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 Malloc1 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2981232 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2981232 /var/tmp/bdevperf.sock 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2981232 ']' 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
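Both subsystems (cnode1 and cnode2, each listening on 4420 and 4421) are now configured on the target, and bdevperf has just been launched as the initiator-side application with its own JSON-RPC socket; the test waits for that socket before continuing. Every rpc_cmd below that carries -s /var/tmp/bdevperf.sock is therefore addressed to bdevperf, not to nvmf_tgt. A rough sketch of that pattern (rpc_cmd is the test wrapper around scripts/rpc.py; paths are abbreviated relative to the SPDK tree):

    # Start bdevperf idle (-z) with a private RPC socket, using the flags shown in the trace above:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # Once the socket answers, drive it over JSON-RPC; the first attach creates controller NVMe0:
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1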
00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.144 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.402 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.402 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:54.402 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:54.402 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.402 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.660 NVMe0n1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.660 1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.660 request: 00:20:54.660 { 00:20:54.660 "name": "NVMe0", 00:20:54.660 "trtype": "tcp", 00:20:54.660 "traddr": "10.0.0.2", 00:20:54.660 "adrfam": "ipv4", 00:20:54.660 "trsvcid": "4420", 00:20:54.660 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:54.660 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:54.660 "hostaddr": "10.0.0.1", 00:20:54.660 "prchk_reftag": false, 00:20:54.660 "prchk_guard": false, 00:20:54.660 "hdgst": false, 00:20:54.660 "ddgst": false, 00:20:54.660 "allow_unrecognized_csi": false, 00:20:54.660 "method": "bdev_nvme_attach_controller", 00:20:54.660 "req_id": 1 00:20:54.660 } 00:20:54.660 Got JSON-RPC error response 00:20:54.660 response: 00:20:54.660 { 00:20:54.660 "code": -114, 00:20:54.660 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.660 } 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.660 request: 00:20:54.660 { 00:20:54.660 "name": "NVMe0", 00:20:54.660 "trtype": "tcp", 00:20:54.660 "traddr": "10.0.0.2", 00:20:54.660 "adrfam": "ipv4", 00:20:54.660 "trsvcid": "4420", 00:20:54.660 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.660 "hostaddr": "10.0.0.1", 00:20:54.660 "prchk_reftag": false, 00:20:54.660 "prchk_guard": false, 00:20:54.660 "hdgst": false, 00:20:54.660 "ddgst": false, 00:20:54.660 "allow_unrecognized_csi": false, 00:20:54.660 "method": "bdev_nvme_attach_controller", 00:20:54.660 "req_id": 1 00:20:54.660 } 00:20:54.660 Got JSON-RPC error response 00:20:54.660 response: 00:20:54.660 { 00:20:54.660 "code": -114, 00:20:54.660 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.660 } 00:20:54.660 11:39:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.660 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.660 request: 00:20:54.660 { 00:20:54.660 "name": "NVMe0", 00:20:54.660 "trtype": "tcp", 00:20:54.660 "traddr": "10.0.0.2", 00:20:54.660 "adrfam": "ipv4", 00:20:54.660 "trsvcid": "4420", 00:20:54.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.660 "hostaddr": "10.0.0.1", 00:20:54.660 "prchk_reftag": false, 00:20:54.660 "prchk_guard": false, 00:20:54.660 "hdgst": false, 00:20:54.660 "ddgst": false, 00:20:54.661 "multipath": "disable", 00:20:54.661 "allow_unrecognized_csi": false, 00:20:54.661 "method": "bdev_nvme_attach_controller", 00:20:54.661 "req_id": 1 00:20:54.661 } 00:20:54.661 Got JSON-RPC error response 00:20:54.661 response: 00:20:54.661 { 00:20:54.661 "code": -114, 00:20:54.661 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:54.661 } 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.661 11:39:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.661 request: 00:20:54.661 { 00:20:54.661 "name": "NVMe0", 00:20:54.661 "trtype": "tcp", 00:20:54.661 "traddr": "10.0.0.2", 00:20:54.661 "adrfam": "ipv4", 00:20:54.661 "trsvcid": "4420", 00:20:54.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.661 "hostaddr": "10.0.0.1", 00:20:54.661 "prchk_reftag": false, 00:20:54.661 "prchk_guard": false, 00:20:54.661 "hdgst": false, 00:20:54.661 "ddgst": false, 00:20:54.661 "multipath": "failover", 00:20:54.661 "allow_unrecognized_csi": false, 00:20:54.661 "method": "bdev_nvme_attach_controller", 00:20:54.661 "req_id": 1 00:20:54.661 } 00:20:54.661 Got JSON-RPC error response 00:20:54.661 response: 00:20:54.661 { 00:20:54.661 "code": -114, 00:20:54.661 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:54.661 } 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.661 11:39:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.661 NVMe0n1 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
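All four -114 responses above come from bdev_nvme_attach_controller being asked to reuse the controller name NVMe0 in a way the existing controller cannot satisfy: a different hostnqn, a different subsystem NQN, multipath explicitly disabled, or failover to the network path that is already attached. The attach to port 4421 just above succeeds because it adds the second listener as an additional path whose parameters match the existing controller. Condensed from the calls above (socket path, addresses and NQNs as in this run):

    rpc() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable    # -114: controller exists and multipath is disabled
    rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover   # -114: same network path already attached
    rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                           # OK: new path under the existing name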
00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.661 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.918 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:54.918 11:39:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:56.290 { 00:20:56.290 "results": [ 00:20:56.290 { 00:20:56.290 "job": "NVMe0n1", 00:20:56.290 "core_mask": "0x1", 00:20:56.290 "workload": "write", 00:20:56.290 "status": "finished", 00:20:56.290 "queue_depth": 128, 00:20:56.290 "io_size": 4096, 00:20:56.290 "runtime": 1.005469, 00:20:56.290 "iops": 18264.113562924365, 00:20:56.290 "mibps": 71.3441936051733, 00:20:56.290 "io_failed": 0, 00:20:56.290 "io_timeout": 0, 00:20:56.290 "avg_latency_us": 6997.585291673726, 00:20:56.290 "min_latency_us": 4150.613333333334, 00:20:56.290 "max_latency_us": 14466.465185185185 00:20:56.290 } 00:20:56.290 ], 00:20:56.290 "core_count": 1 00:20:56.290 } 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2981232 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2981232 ']' 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2981232 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2981232 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2981232' 00:20:56.290 killing process with pid 2981232 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2981232 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2981232 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.290 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:56.548 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:56.548 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.548 [2024-11-15 11:39:34.469761] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:20:56.548 [2024-11-15 11:39:34.469863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2981232 ] 00:20:56.548 [2024-11-15 11:39:34.540082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.548 [2024-11-15 11:39:34.600422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.548 [2024-11-15 11:39:35.304654] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 389f4b34-1e06-4531-acd9-8824b41ae178 already exists 00:20:56.548 [2024-11-15 11:39:35.304691] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:389f4b34-1e06-4531-acd9-8824b41ae178 alias for bdev NVMe1n1 00:20:56.548 [2024-11-15 11:39:35.304716] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:56.548 Running I/O for 1 seconds... 00:20:56.548 18236.00 IOPS, 71.23 MiB/s 00:20:56.549 Latency(us) 00:20:56.549 [2024-11-15T10:39:36.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.549 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:56.549 NVMe0n1 : 1.01 18264.11 71.34 0.00 0.00 6997.59 4150.61 14466.47 00:20:56.549 [2024-11-15T10:39:36.976Z] =================================================================================================================== 00:20:56.549 [2024-11-15T10:39:36.976Z] Total : 18264.11 71.34 0.00 0.00 6997.59 4150.61 14466.47 00:20:56.549 Received shutdown signal, test time was about 1.000000 seconds 00:20:56.549 00:20:56.549 Latency(us) 00:20:56.549 [2024-11-15T10:39:36.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.549 [2024-11-15T10:39:36.976Z] =================================================================================================================== 00:20:56.549 [2024-11-15T10:39:36.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.549 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.549 rmmod nvme_tcp 00:20:56.549 rmmod nvme_fabrics 00:20:56.549 rmmod nvme_keyring 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:56.549 
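nvmftestfini has just unloaded the nvme-tcp, nvme-fabrics and nvme-keyring modules (the rmmod lines above); the rest of the teardown follows in the next lines: the target process (pid 2981209) is killed, the SPDK_NVMF iptables rules are stripped, the test namespace is removed and the initiator address is flushed. In outline (the netns removal is what _remove_spdk_ns is assumed to do here; its output is suppressed in the trace):

    kill 2981209                                            # nvmf_tgt started by nvmfappstart
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules the test added
    ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address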
11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2981209 ']' 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2981209 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2981209 ']' 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2981209 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2981209 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2981209' 00:20:56.549 killing process with pid 2981209 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2981209 00:20:56.549 11:39:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2981209 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.807 11:39:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.713 11:39:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:58.713 00:20:58.713 real 0m7.615s 00:20:58.713 user 0m11.962s 00:20:58.713 sys 0m2.374s 00:20:58.713 11:39:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:58.713 11:39:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.713 ************************************ 00:20:58.713 END TEST nvmf_multicontroller 00:20:58.713 ************************************ 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 ************************************ 00:20:58.971 START TEST nvmf_aer 00:20:58.971 ************************************ 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:58.971 * Looking for test storage... 00:20:58.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:58.971 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:58.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.972 --rc genhtml_branch_coverage=1 00:20:58.972 --rc genhtml_function_coverage=1 00:20:58.972 --rc genhtml_legend=1 00:20:58.972 --rc geninfo_all_blocks=1 00:20:58.972 --rc geninfo_unexecuted_blocks=1 00:20:58.972 00:20:58.972 ' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:58.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.972 --rc genhtml_branch_coverage=1 00:20:58.972 --rc genhtml_function_coverage=1 00:20:58.972 --rc genhtml_legend=1 00:20:58.972 --rc geninfo_all_blocks=1 00:20:58.972 --rc geninfo_unexecuted_blocks=1 00:20:58.972 00:20:58.972 ' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:58.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.972 --rc genhtml_branch_coverage=1 00:20:58.972 --rc genhtml_function_coverage=1 00:20:58.972 --rc genhtml_legend=1 00:20:58.972 --rc geninfo_all_blocks=1 00:20:58.972 --rc geninfo_unexecuted_blocks=1 00:20:58.972 00:20:58.972 ' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:58.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:58.972 --rc genhtml_branch_coverage=1 00:20:58.972 --rc genhtml_function_coverage=1 00:20:58.972 --rc genhtml_legend=1 00:20:58.972 --rc geninfo_all_blocks=1 00:20:58.972 --rc geninfo_unexecuted_blocks=1 00:20:58.972 00:20:58.972 ' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:58.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:58.972 11:39:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:01.504 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:01.505 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:01.505 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:01.505 Found net devices under 0000:09:00.0: cvl_0_0 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:01.505 11:39:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:01.505 Found net devices under 0000:09:00.1: cvl_0_1 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:01.505 
11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:01.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:21:01.505 00:21:01.505 --- 10.0.0.2 ping statistics --- 00:21:01.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.505 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:21:01.505 00:21:01.505 --- 10.0.0.1 ping statistics --- 00:21:01.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.505 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2983454 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2983454 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2983454 ']' 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:01.505 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.505 [2024-11-15 11:39:41.689616] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:21:01.506 [2024-11-15 11:39:41.689726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.506 [2024-11-15 11:39:41.766330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:01.506 [2024-11-15 11:39:41.829039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.506 [2024-11-15 11:39:41.829099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:01.506 [2024-11-15 11:39:41.829112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.506 [2024-11-15 11:39:41.829123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.506 [2024-11-15 11:39:41.829144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.506 [2024-11-15 11:39:41.830898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.506 [2024-11-15 11:39:41.831041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.506 [2024-11-15 11:39:41.831105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:01.506 [2024-11-15 11:39:41.831108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 [2024-11-15 11:39:41.982598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.764 11:39:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 Malloc0 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 [2024-11-15 11:39:42.054191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.764 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:01.764 [ 00:21:01.764 { 00:21:01.764 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:01.764 "subtype": "Discovery", 00:21:01.764 "listen_addresses": [], 00:21:01.764 "allow_any_host": true, 00:21:01.764 "hosts": [] 00:21:01.764 }, 00:21:01.764 { 00:21:01.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.764 "subtype": "NVMe", 00:21:01.764 "listen_addresses": [ 00:21:01.764 { 00:21:01.764 "trtype": "TCP", 00:21:01.764 "adrfam": "IPv4", 00:21:01.764 "traddr": "10.0.0.2", 00:21:01.764 "trsvcid": "4420" 00:21:01.764 } 00:21:01.764 ], 00:21:01.764 "allow_any_host": true, 00:21:01.764 "hosts": [], 00:21:01.764 "serial_number": "SPDK00000000000001", 00:21:01.764 "model_number": "SPDK bdev Controller", 00:21:01.764 "max_namespaces": 2, 00:21:01.764 "min_cntlid": 1, 00:21:01.764 "max_cntlid": 65519, 00:21:01.764 "namespaces": [ 00:21:01.764 { 00:21:01.764 "nsid": 1, 00:21:01.764 "bdev_name": "Malloc0", 00:21:01.764 "name": "Malloc0", 00:21:01.764 "nguid": "72EA4C854DD440C4BD2E0024FC0A0851", 00:21:01.764 "uuid": "72ea4c85-4dd4-40c4-bd2e-0024fc0a0851" 00:21:01.764 } 00:21:01.765 ] 00:21:01.765 } 00:21:01.765 ] 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2983602 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:01.765 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.023 Malloc1 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.023 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.280 [ 00:21:02.280 { 00:21:02.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:02.280 "subtype": "Discovery", 00:21:02.280 "listen_addresses": [], 00:21:02.280 "allow_any_host": true, 00:21:02.280 "hosts": [] 00:21:02.280 }, 00:21:02.280 { 00:21:02.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.280 "subtype": "NVMe", 00:21:02.280 "listen_addresses": [ 00:21:02.280 { 00:21:02.280 "trtype": "TCP", 00:21:02.280 "adrfam": "IPv4", 00:21:02.280 "traddr": "10.0.0.2", 00:21:02.280 "trsvcid": "4420" 00:21:02.280 } 00:21:02.280 ], 00:21:02.281 "allow_any_host": true, 00:21:02.281 "hosts": [], 00:21:02.281 "serial_number": "SPDK00000000000001", 00:21:02.281 "model_number": "SPDK bdev Controller", 00:21:02.281 "max_namespaces": 2, 00:21:02.281 "min_cntlid": 1, 00:21:02.281 "max_cntlid": 65519, 00:21:02.281 "namespaces": [ 00:21:02.281 
{ 00:21:02.281 "nsid": 1, 00:21:02.281 "bdev_name": "Malloc0", 00:21:02.281 "name": "Malloc0", 00:21:02.281 "nguid": "72EA4C854DD440C4BD2E0024FC0A0851", 00:21:02.281 "uuid": "72ea4c85-4dd4-40c4-bd2e-0024fc0a0851" 00:21:02.281 }, 00:21:02.281 { 00:21:02.281 "nsid": 2, 00:21:02.281 "bdev_name": "Malloc1", 00:21:02.281 "name": "Malloc1", 00:21:02.281 "nguid": "4E6693D03F5743AF9894EECE6AACAAF1", 00:21:02.281 "uuid": "4e6693d0-3f57-43af-9894-eece6aacaaf1" 00:21:02.281 } 00:21:02.281 ] 00:21:02.281 } 00:21:02.281 ] 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2983602 00:21:02.281 Asynchronous Event Request test 00:21:02.281 Attaching to 10.0.0.2 00:21:02.281 Attached to 10.0.0.2 00:21:02.281 Registering asynchronous event callbacks... 00:21:02.281 Starting namespace attribute notice tests for all controllers... 00:21:02.281 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:02.281 aer_cb - Changed Namespace 00:21:02.281 Cleaning up... 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.281 rmmod nvme_tcp 00:21:02.281 rmmod nvme_fabrics 00:21:02.281 rmmod nvme_keyring 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2983454 ']' 
00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2983454 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2983454 ']' 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2983454 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2983454 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2983454' 00:21:02.281 killing process with pid 2983454 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2983454 00:21:02.281 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2983454 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.540 11:39:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:05.085 00:21:05.085 real 0m5.757s 00:21:05.085 user 0m4.929s 00:21:05.085 sys 0m2.100s 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:05.085 ************************************ 00:21:05.085 END TEST nvmf_aer 00:21:05.085 ************************************ 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.085 ************************************ 00:21:05.085 START TEST nvmf_async_init 00:21:05.085 
************************************ 00:21:05.085 11:39:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:05.085 * Looking for test storage... 00:21:05.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.086 --rc genhtml_branch_coverage=1 00:21:05.086 --rc genhtml_function_coverage=1 00:21:05.086 --rc genhtml_legend=1 00:21:05.086 --rc geninfo_all_blocks=1 00:21:05.086 --rc geninfo_unexecuted_blocks=1 00:21:05.086 00:21:05.086 ' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.086 --rc genhtml_branch_coverage=1 00:21:05.086 --rc genhtml_function_coverage=1 00:21:05.086 --rc genhtml_legend=1 00:21:05.086 --rc geninfo_all_blocks=1 00:21:05.086 --rc geninfo_unexecuted_blocks=1 00:21:05.086 00:21:05.086 ' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.086 --rc genhtml_branch_coverage=1 00:21:05.086 --rc genhtml_function_coverage=1 00:21:05.086 --rc genhtml_legend=1 00:21:05.086 --rc geninfo_all_blocks=1 00:21:05.086 --rc geninfo_unexecuted_blocks=1 00:21:05.086 00:21:05.086 ' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.086 --rc genhtml_branch_coverage=1 00:21:05.086 --rc genhtml_function_coverage=1 00:21:05.086 --rc genhtml_legend=1 00:21:05.086 --rc geninfo_all_blocks=1 00:21:05.086 --rc geninfo_unexecuted_blocks=1 00:21:05.086 00:21:05.086 ' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.086 11:39:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:05.086 11:39:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=80e3676895fc46c1b55f9552203f54d3 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.086 11:39:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:06.987 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:06.987 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:06.987 Found net devices under 0000:09:00.0: cvl_0_0 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.987 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:06.988 Found net devices under 0000:09:00.1: cvl_0_1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.988 11:39:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:21:06.988 00:21:06.988 --- 10.0.0.2 ping statistics --- 00:21:06.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.988 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:06.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:21:06.988 00:21:06.988 --- 10.0.0.1 ping statistics --- 00:21:06.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.988 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2985555 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2985555 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2985555 ']' 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.988 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.246 [2024-11-15 11:39:47.453760] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:21:07.246 [2024-11-15 11:39:47.453830] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.246 [2024-11-15 11:39:47.526494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.246 [2024-11-15 11:39:47.583951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.246 [2024-11-15 11:39:47.584000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.246 [2024-11-15 11:39:47.584024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.246 [2024-11-15 11:39:47.584035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.246 [2024-11-15 11:39:47.584046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:07.246 [2024-11-15 11:39:47.584679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.503 [2024-11-15 11:39:47.731389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.503 null0 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:07.503 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 80e3676895fc46c1b55f9552203f54d3 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.504 [2024-11-15 11:39:47.771660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.504 11:39:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.761 nvme0n1 00:21:07.761 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.761 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.761 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.761 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.761 [ 00:21:07.761 { 00:21:07.761 "name": "nvme0n1", 00:21:07.761 "aliases": [ 00:21:07.762 "80e36768-95fc-46c1-b55f-9552203f54d3" 00:21:07.762 ], 00:21:07.762 "product_name": "NVMe disk", 00:21:07.762 "block_size": 512, 00:21:07.762 "num_blocks": 2097152, 00:21:07.762 "uuid": "80e36768-95fc-46c1-b55f-9552203f54d3", 00:21:07.762 "numa_id": 0, 00:21:07.762 "assigned_rate_limits": { 00:21:07.762 "rw_ios_per_sec": 0, 00:21:07.762 "rw_mbytes_per_sec": 0, 00:21:07.762 "r_mbytes_per_sec": 0, 00:21:07.762 "w_mbytes_per_sec": 0 00:21:07.762 }, 00:21:07.762 "claimed": false, 00:21:07.762 "zoned": false, 00:21:07.762 "supported_io_types": { 00:21:07.762 "read": true, 00:21:07.762 "write": true, 00:21:07.762 "unmap": false, 00:21:07.762 "flush": true, 00:21:07.762 "reset": true, 00:21:07.762 "nvme_admin": true, 00:21:07.762 "nvme_io": true, 00:21:07.762 "nvme_io_md": false, 00:21:07.762 "write_zeroes": true, 00:21:07.762 "zcopy": false, 00:21:07.762 "get_zone_info": false, 00:21:07.762 "zone_management": false, 00:21:07.762 "zone_append": false, 00:21:07.762 "compare": true, 00:21:07.762 "compare_and_write": true, 00:21:07.762 "abort": true, 00:21:07.762 "seek_hole": false, 00:21:07.762 "seek_data": false, 00:21:07.762 "copy": true, 00:21:07.762 "nvme_iov_md": false 00:21:07.762 }, 00:21:07.762 
"memory_domains": [ 00:21:07.762 { 00:21:07.762 "dma_device_id": "system", 00:21:07.762 "dma_device_type": 1 00:21:07.762 } 00:21:07.762 ], 00:21:07.762 "driver_specific": { 00:21:07.762 "nvme": [ 00:21:07.762 { 00:21:07.762 "trid": { 00:21:07.762 "trtype": "TCP", 00:21:07.762 "adrfam": "IPv4", 00:21:07.762 "traddr": "10.0.0.2", 00:21:07.762 "trsvcid": "4420", 00:21:07.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.762 }, 00:21:07.762 "ctrlr_data": { 00:21:07.762 "cntlid": 1, 00:21:07.762 "vendor_id": "0x8086", 00:21:07.762 "model_number": "SPDK bdev Controller", 00:21:07.762 "serial_number": "00000000000000000000", 00:21:07.762 "firmware_revision": "25.01", 00:21:07.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.762 "oacs": { 00:21:07.762 "security": 0, 00:21:07.762 "format": 0, 00:21:07.762 "firmware": 0, 00:21:07.762 "ns_manage": 0 00:21:07.762 }, 00:21:07.762 "multi_ctrlr": true, 00:21:07.762 "ana_reporting": false 00:21:07.762 }, 00:21:07.762 "vs": { 00:21:07.762 "nvme_version": "1.3" 00:21:07.762 }, 00:21:07.762 "ns_data": { 00:21:07.762 "id": 1, 00:21:07.762 "can_share": true 00:21:07.762 } 00:21:07.762 } 00:21:07.762 ], 00:21:07.762 "mp_policy": "active_passive" 00:21:07.762 } 00:21:07.762 } 00:21:07.762 ] 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.762 [2024-11-15 11:39:48.021897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:07.762 [2024-11-15 11:39:48.021969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c3b20 (9): Bad file descriptor 00:21:07.762 [2024-11-15 11:39:48.154420] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:07.762 [ 00:21:07.762 { 00:21:07.762 "name": "nvme0n1", 00:21:07.762 "aliases": [ 00:21:07.762 "80e36768-95fc-46c1-b55f-9552203f54d3" 00:21:07.762 ], 00:21:07.762 "product_name": "NVMe disk", 00:21:07.762 "block_size": 512, 00:21:07.762 "num_blocks": 2097152, 00:21:07.762 "uuid": "80e36768-95fc-46c1-b55f-9552203f54d3", 00:21:07.762 "numa_id": 0, 00:21:07.762 "assigned_rate_limits": { 00:21:07.762 "rw_ios_per_sec": 0, 00:21:07.762 "rw_mbytes_per_sec": 0, 00:21:07.762 "r_mbytes_per_sec": 0, 00:21:07.762 "w_mbytes_per_sec": 0 00:21:07.762 }, 00:21:07.762 "claimed": false, 00:21:07.762 "zoned": false, 00:21:07.762 "supported_io_types": { 00:21:07.762 "read": true, 00:21:07.762 "write": true, 00:21:07.762 "unmap": false, 00:21:07.762 "flush": true, 00:21:07.762 "reset": true, 00:21:07.762 "nvme_admin": true, 00:21:07.762 "nvme_io": true, 00:21:07.762 "nvme_io_md": false, 00:21:07.762 "write_zeroes": true, 00:21:07.762 "zcopy": false, 00:21:07.762 "get_zone_info": false, 00:21:07.762 "zone_management": false, 00:21:07.762 "zone_append": false, 00:21:07.762 "compare": true, 00:21:07.762 "compare_and_write": true, 00:21:07.762 "abort": true, 00:21:07.762 "seek_hole": false, 00:21:07.762 "seek_data": false, 00:21:07.762 "copy": true, 00:21:07.762 "nvme_iov_md": false 00:21:07.762 }, 00:21:07.762 "memory_domains": [ 00:21:07.762 { 00:21:07.762 "dma_device_id": "system", 00:21:07.762 "dma_device_type": 1 00:21:07.762 } 00:21:07.762 ], 00:21:07.762 "driver_specific": { 00:21:07.762 "nvme": [ 00:21:07.762 { 00:21:07.762 "trid": { 00:21:07.762 "trtype": "TCP", 00:21:07.762 "adrfam": "IPv4", 00:21:07.762 "traddr": "10.0.0.2", 00:21:07.762 "trsvcid": "4420", 00:21:07.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:07.762 }, 00:21:07.762 "ctrlr_data": { 00:21:07.762 "cntlid": 2, 00:21:07.762 "vendor_id": "0x8086", 00:21:07.762 "model_number": "SPDK bdev Controller", 00:21:07.762 "serial_number": "00000000000000000000", 00:21:07.762 "firmware_revision": "25.01", 00:21:07.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.762 "oacs": { 00:21:07.762 "security": 0, 00:21:07.762 "format": 0, 00:21:07.762 "firmware": 0, 00:21:07.762 "ns_manage": 0 00:21:07.762 }, 00:21:07.762 "multi_ctrlr": true, 00:21:07.762 "ana_reporting": false 00:21:07.762 }, 00:21:07.762 "vs": { 00:21:07.762 "nvme_version": "1.3" 00:21:07.762 }, 00:21:07.762 "ns_data": { 00:21:07.762 "id": 1, 00:21:07.762 "can_share": true 00:21:07.762 } 00:21:07.762 } 00:21:07.762 ], 00:21:07.762 "mp_policy": "active_passive" 00:21:07.762 } 00:21:07.762 } 00:21:07.762 ] 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.762 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
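The second bdev_get_bdevs dump above is the post-reset check: the namespace keeps the same UUID (80e36768-95fc-46c1-b55f-9552203f54d3) while the controller comes back with cntlid 2 instead of 1, showing that the reset dropped and re-established the association with the target before the controller is detached. A minimal, hypothetical spot-check of the same condition, which would have to run between the reset and the detach; jq is an assumption here, the test script performs its own verification:

  uuid=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid')
  cntlid=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')
  # Same namespace identity, new controller ID after the reset.
  [[ "$uuid" == "80e36768-95fc-46c1-b55f-9552203f54d3" && "$cntlid" -eq 2 ]] || echo "unexpected post-reset state" >&2
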
00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.NZ2zCPYBTd 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.NZ2zCPYBTd 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.NZ2zCPYBTd 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.019 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.019 [2024-11-15 11:39:48.214541] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.020 [2024-11-15 11:39:48.214682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 [2024-11-15 11:39:48.230595] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.020 nvme0n1 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 [ 00:21:08.020 { 00:21:08.020 "name": "nvme0n1", 00:21:08.020 "aliases": [ 00:21:08.020 "80e36768-95fc-46c1-b55f-9552203f54d3" 00:21:08.020 ], 00:21:08.020 "product_name": "NVMe disk", 00:21:08.020 "block_size": 512, 00:21:08.020 "num_blocks": 2097152, 00:21:08.020 "uuid": "80e36768-95fc-46c1-b55f-9552203f54d3", 00:21:08.020 "numa_id": 0, 00:21:08.020 "assigned_rate_limits": { 00:21:08.020 "rw_ios_per_sec": 0, 00:21:08.020 "rw_mbytes_per_sec": 0, 00:21:08.020 "r_mbytes_per_sec": 0, 00:21:08.020 "w_mbytes_per_sec": 0 00:21:08.020 }, 00:21:08.020 "claimed": false, 00:21:08.020 "zoned": false, 00:21:08.020 "supported_io_types": { 00:21:08.020 "read": true, 00:21:08.020 "write": true, 00:21:08.020 "unmap": false, 00:21:08.020 "flush": true, 00:21:08.020 "reset": true, 00:21:08.020 "nvme_admin": true, 00:21:08.020 "nvme_io": true, 00:21:08.020 "nvme_io_md": false, 00:21:08.020 "write_zeroes": true, 00:21:08.020 "zcopy": false, 00:21:08.020 "get_zone_info": false, 00:21:08.020 "zone_management": false, 00:21:08.020 "zone_append": false, 00:21:08.020 "compare": true, 00:21:08.020 "compare_and_write": true, 00:21:08.020 "abort": true, 00:21:08.020 "seek_hole": false, 00:21:08.020 "seek_data": false, 00:21:08.020 "copy": true, 00:21:08.020 "nvme_iov_md": false 00:21:08.020 }, 00:21:08.020 "memory_domains": [ 00:21:08.020 { 00:21:08.020 "dma_device_id": "system", 00:21:08.020 "dma_device_type": 1 00:21:08.020 } 00:21:08.020 ], 00:21:08.020 "driver_specific": { 00:21:08.020 "nvme": [ 00:21:08.020 { 00:21:08.020 "trid": { 00:21:08.020 "trtype": "TCP", 00:21:08.020 "adrfam": "IPv4", 00:21:08.020 "traddr": "10.0.0.2", 00:21:08.020 "trsvcid": "4421", 00:21:08.020 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:08.020 }, 00:21:08.020 "ctrlr_data": { 00:21:08.020 "cntlid": 3, 00:21:08.020 "vendor_id": "0x8086", 00:21:08.020 "model_number": "SPDK bdev Controller", 00:21:08.020 "serial_number": "00000000000000000000", 00:21:08.020 "firmware_revision": "25.01", 00:21:08.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:08.020 "oacs": { 00:21:08.020 "security": 0, 00:21:08.020 "format": 0, 00:21:08.020 "firmware": 0, 00:21:08.020 "ns_manage": 0 00:21:08.020 }, 00:21:08.020 "multi_ctrlr": true, 00:21:08.020 "ana_reporting": false 00:21:08.020 }, 00:21:08.020 "vs": { 00:21:08.020 "nvme_version": "1.3" 00:21:08.020 }, 00:21:08.020 "ns_data": { 00:21:08.020 "id": 1, 00:21:08.020 "can_share": true 00:21:08.020 } 00:21:08.020 } 00:21:08.020 ], 00:21:08.020 "mp_policy": "active_passive" 00:21:08.020 } 00:21:08.020 } 00:21:08.020 ] 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.NZ2zCPYBTd 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
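The final block of async_init.sh repeats the attach over a TLS-protected listener: a PSK in the NVMe TLS interchange format is written to a temp file, registered in the keyring as key0, and required for host nqn.2016-06.io.spdk:host1 on a second listener at port 4421; the resulting bdev dump above shows trsvcid 4421 and cntlid 3, and both the listen and connect sides log that TLS support is considered experimental. A condensed, hypothetical replay of that sequence with the same identifiers (the key file name comes from mktemp in the real run):

  key_path=$(mktemp)
  echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
  chmod 0600 "$key_path"
  scripts/rpc.py keyring_file_add_key key0 "$key_path"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Cleanup mirrors the trace: detach the controller and remove the key file.
  scripts/rpc.py bdev_nvme_detach_controller nvme0
  rm -f "$key_path"
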
00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.020 rmmod nvme_tcp 00:21:08.020 rmmod nvme_fabrics 00:21:08.020 rmmod nvme_keyring 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2985555 ']' 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2985555 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2985555 ']' 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2985555 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2985555 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2985555' 00:21:08.020 killing process with pid 2985555 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2985555 00:21:08.020 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2985555 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
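nvmftestfini then unwinds the setup: the kernel initiator modules are unloaded (the rmmod lines above show nvme_tcp pulling nvme_fabrics and nvme_keyring out with it), nvmf_tgt (pid 2985555) is killed and waited on, the SPDK_NVMF iptables rule is dropped by re-applying a filtered iptables-save dump, and the target namespace and leftover addresses are removed just below before the per-test timing summary is printed. A hedged outline of the equivalent manual steps; _remove_spdk_ns runs with tracing disabled, so the exact namespace command is an assumption:

  sync
  modprobe -v -r nvme-tcp                                # also removes the now-unused nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                     # nvmfpid=2985555 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the ACCEPT rule added for port 4420
  ip netns delete cvl_0_0_ns_spdk                        # assumed to be what _remove_spdk_ns does in this run
  ip -4 addr flush cvl_0_1
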
00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.277 11:39:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.818 00:21:10.818 real 0m5.691s 00:21:10.818 user 0m2.191s 00:21:10.818 sys 0m1.955s 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:10.818 ************************************ 00:21:10.818 END TEST nvmf_async_init 00:21:10.818 ************************************ 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.818 ************************************ 00:21:10.818 START TEST dma 00:21:10.818 ************************************ 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:10.818 * Looking for test storage... 00:21:10.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:10.818 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.819 --rc genhtml_branch_coverage=1 00:21:10.819 --rc genhtml_function_coverage=1 00:21:10.819 --rc genhtml_legend=1 00:21:10.819 --rc geninfo_all_blocks=1 00:21:10.819 --rc geninfo_unexecuted_blocks=1 00:21:10.819 00:21:10.819 ' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.819 --rc genhtml_branch_coverage=1 00:21:10.819 --rc genhtml_function_coverage=1 00:21:10.819 --rc genhtml_legend=1 00:21:10.819 --rc geninfo_all_blocks=1 00:21:10.819 --rc geninfo_unexecuted_blocks=1 00:21:10.819 00:21:10.819 ' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.819 --rc genhtml_branch_coverage=1 00:21:10.819 --rc genhtml_function_coverage=1 00:21:10.819 --rc genhtml_legend=1 00:21:10.819 --rc geninfo_all_blocks=1 00:21:10.819 --rc geninfo_unexecuted_blocks=1 00:21:10.819 00:21:10.819 ' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.819 --rc genhtml_branch_coverage=1 00:21:10.819 --rc genhtml_function_coverage=1 00:21:10.819 --rc genhtml_legend=1 00:21:10.819 --rc geninfo_all_blocks=1 00:21:10.819 --rc geninfo_unexecuted_blocks=1 00:21:10.819 00:21:10.819 ' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.819 
11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:10.819 00:21:10.819 real 0m0.156s 00:21:10.819 user 0m0.100s 00:21:10.819 sys 0m0.065s 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:10.819 ************************************ 00:21:10.819 END TEST dma 00:21:10.819 ************************************ 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.819 ************************************ 00:21:10.819 START TEST nvmf_identify 00:21:10.819 
************************************ 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:10.819 * Looking for test storage... 00:21:10.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.819 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.820 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.820 11:39:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.820 --rc genhtml_branch_coverage=1 00:21:10.820 --rc genhtml_function_coverage=1 00:21:10.820 --rc genhtml_legend=1 00:21:10.820 --rc geninfo_all_blocks=1 00:21:10.820 --rc geninfo_unexecuted_blocks=1 00:21:10.820 00:21:10.820 ' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.820 --rc genhtml_branch_coverage=1 00:21:10.820 --rc genhtml_function_coverage=1 00:21:10.820 --rc genhtml_legend=1 00:21:10.820 --rc geninfo_all_blocks=1 00:21:10.820 --rc geninfo_unexecuted_blocks=1 00:21:10.820 00:21:10.820 ' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.820 --rc genhtml_branch_coverage=1 00:21:10.820 --rc genhtml_function_coverage=1 00:21:10.820 --rc genhtml_legend=1 00:21:10.820 --rc geninfo_all_blocks=1 00:21:10.820 --rc geninfo_unexecuted_blocks=1 00:21:10.820 00:21:10.820 ' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:10.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.820 --rc genhtml_branch_coverage=1 00:21:10.820 --rc genhtml_function_coverage=1 00:21:10.820 --rc genhtml_legend=1 00:21:10.820 --rc geninfo_all_blocks=1 00:21:10.820 --rc geninfo_unexecuted_blocks=1 00:21:10.820 00:21:10.820 ' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.820 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.821 11:39:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:12.724 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:12.724 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.724 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:12.983 Found net devices under 0000:09:00.0: cvl_0_0 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:12.983 Found net devices under 0000:09:00.1: cvl_0_1 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:12.983 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:12.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:21:12.984 00:21:12.984 --- 10.0.0.2 ping statistics --- 00:21:12.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.984 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:12.984 00:21:12.984 --- 10.0.0.1 ping statistics --- 00:21:12.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.984 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2987813 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2987813 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2987813 ']' 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.984 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.984 [2024-11-15 11:39:53.369482] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:21:12.984 [2024-11-15 11:39:53.369589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.242 [2024-11-15 11:39:53.441763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.242 [2024-11-15 11:39:53.499946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.242 [2024-11-15 11:39:53.499998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.242 [2024-11-15 11:39:53.500026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.242 [2024-11-15 11:39:53.500037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.242 [2024-11-15 11:39:53.500047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.242 [2024-11-15 11:39:53.501715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.242 [2024-11-15 11:39:53.501804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.242 [2024-11-15 11:39:53.501900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.242 [2024-11-15 11:39:53.501907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.242 [2024-11-15 11:39:53.626315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.242 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.506 Malloc0 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.506 [2024-11-15 11:39:53.723175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.506 [ 00:21:13.506 { 00:21:13.506 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:13.506 "subtype": "Discovery", 00:21:13.506 "listen_addresses": [ 00:21:13.506 { 00:21:13.506 "trtype": "TCP", 00:21:13.506 "adrfam": "IPv4", 00:21:13.506 "traddr": "10.0.0.2", 00:21:13.506 "trsvcid": "4420" 00:21:13.506 } 00:21:13.506 ], 00:21:13.506 "allow_any_host": true, 00:21:13.506 "hosts": [] 00:21:13.506 }, 00:21:13.506 { 00:21:13.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.506 "subtype": "NVMe", 00:21:13.506 "listen_addresses": [ 00:21:13.506 { 00:21:13.506 "trtype": "TCP", 00:21:13.506 "adrfam": "IPv4", 00:21:13.506 "traddr": "10.0.0.2", 00:21:13.506 "trsvcid": "4420" 00:21:13.506 } 00:21:13.506 ], 00:21:13.506 "allow_any_host": true, 00:21:13.506 "hosts": [], 00:21:13.506 "serial_number": "SPDK00000000000001", 00:21:13.506 "model_number": "SPDK bdev Controller", 00:21:13.506 "max_namespaces": 32, 00:21:13.506 "min_cntlid": 1, 00:21:13.506 "max_cntlid": 65519, 00:21:13.506 "namespaces": [ 00:21:13.506 { 00:21:13.506 "nsid": 1, 00:21:13.506 "bdev_name": "Malloc0", 00:21:13.506 "name": "Malloc0", 00:21:13.506 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:13.506 "eui64": "ABCDEF0123456789", 00:21:13.506 "uuid": "5755ae52-a9bb-4f59-9052-5d331b0fa074" 00:21:13.506 } 00:21:13.506 ] 00:21:13.506 } 00:21:13.506 ] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.506 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:13.506 [2024-11-15 11:39:53.765877] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:21:13.506 [2024-11-15 11:39:53.765920] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987835 ] 00:21:13.506 [2024-11-15 11:39:53.818675] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:13.506 [2024-11-15 11:39:53.818741] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:13.506 [2024-11-15 11:39:53.818752] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:13.506 [2024-11-15 11:39:53.818767] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:13.506 [2024-11-15 11:39:53.818784] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:13.506 [2024-11-15 11:39:53.822795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:13.506 [2024-11-15 11:39:53.822857] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x52f690 0 00:21:13.506 [2024-11-15 11:39:53.830335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:13.506 [2024-11-15 11:39:53.830357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:13.506 [2024-11-15 11:39:53.830366] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:13.506 [2024-11-15 11:39:53.830372] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:13.506 [2024-11-15 11:39:53.830419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.506 [2024-11-15 11:39:53.830432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.506 [2024-11-15 11:39:53.830440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.506 [2024-11-15 11:39:53.830458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:13.506 [2024-11-15 11:39:53.830486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.506 [2024-11-15 11:39:53.837315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.837334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.837342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.507 [2024-11-15 11:39:53.837371] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:13.507 [2024-11-15 11:39:53.837390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:13.507 [2024-11-15 11:39:53.837400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:13.507 [2024-11-15 11:39:53.837422] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.507 [2024-11-15 11:39:53.837450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-15 11:39:53.837474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.507 [2024-11-15 11:39:53.837580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.837593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.837600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.507 [2024-11-15 11:39:53.837617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:13.507 [2024-11-15 11:39:53.837629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:13.507 [2024-11-15 11:39:53.837642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.507 [2024-11-15 11:39:53.837667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-15 11:39:53.837689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.507 [2024-11-15 11:39:53.837780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.837794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.837802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.507 [2024-11-15 11:39:53.837818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:13.507 [2024-11-15 11:39:53.837832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:13.507 [2024-11-15 11:39:53.837844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.837859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.507 [2024-11-15 11:39:53.837870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-15 11:39:53.837891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 
00:21:13.507 [2024-11-15 11:39:53.837975] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.837989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.837997] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.507 [2024-11-15 11:39:53.838012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:13.507 [2024-11-15 11:39:53.838034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.507 [2024-11-15 11:39:53.838061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-15 11:39:53.838082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.507 [2024-11-15 11:39:53.838165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.838177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.838184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.507 [2024-11-15 11:39:53.838199] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:13.507 [2024-11-15 11:39:53.838207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:13.507 [2024-11-15 11:39:53.838220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:13.507 [2024-11-15 11:39:53.838331] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:13.507 [2024-11-15 11:39:53.838341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:13.507 [2024-11-15 11:39:53.838356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.507 [2024-11-15 11:39:53.838381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-15 11:39:53.838403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.507 [2024-11-15 11:39:53.838500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.838514] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.838522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.507 [2024-11-15 11:39:53.838537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:13.507 [2024-11-15 11:39:53.838553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.507 [2024-11-15 11:39:53.838569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.507 [2024-11-15 11:39:53.838580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.507 [2024-11-15 11:39:53.838601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.507 [2024-11-15 11:39:53.838683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.507 [2024-11-15 11:39:53.838697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.507 [2024-11-15 11:39:53.838704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.838711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.508 [2024-11-15 11:39:53.838726] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:13.508 [2024-11-15 11:39:53.838735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:13.508 [2024-11-15 11:39:53.838749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:13.508 [2024-11-15 11:39:53.838764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:13.508 [2024-11-15 11:39:53.838781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.838789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 11:39:53.838800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-15 11:39:53.838822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.508 [2024-11-15 11:39:53.838959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.508 [2024-11-15 11:39:53.838974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.508 [2024-11-15 11:39:53.838982] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.838989] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x52f690): datao=0, datal=4096, cccid=0 00:21:13.508 [2024-11-15 11:39:53.838997] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x591100) on tqpair(0x52f690): expected_datao=0, payload_size=4096 00:21:13.508 [2024-11-15 11:39:53.839005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839016] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839024] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-15 11:39:53.839047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-15 11:39:53.839055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.508 [2024-11-15 11:39:53.839073] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:13.508 [2024-11-15 11:39:53.839082] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:13.508 [2024-11-15 11:39:53.839089] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:13.508 [2024-11-15 11:39:53.839103] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:13.508 [2024-11-15 11:39:53.839112] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:13.508 [2024-11-15 11:39:53.839121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:13.508 [2024-11-15 11:39:53.839139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:13.508 [2024-11-15 11:39:53.839153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 11:39:53.839179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.508 [2024-11-15 11:39:53.839204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.508 [2024-11-15 11:39:53.839300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-15 11:39:53.839326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-15 11:39:53.839334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.508 [2024-11-15 11:39:53.839353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 
11:39:53.839378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.508 [2024-11-15 11:39:53.839388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 11:39:53.839410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.508 [2024-11-15 11:39:53.839420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 11:39:53.839442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.508 [2024-11-15 11:39:53.839452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 11:39:53.839475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.508 [2024-11-15 11:39:53.839484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:13.508 [2024-11-15 11:39:53.839499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:13.508 [2024-11-15 11:39:53.839511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.508 [2024-11-15 11:39:53.839518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x52f690) 00:21:13.508 [2024-11-15 11:39:53.839528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.508 [2024-11-15 11:39:53.839550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591100, cid 0, qid 0 00:21:13.508 [2024-11-15 11:39:53.839562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591280, cid 1, qid 0 00:21:13.508 [2024-11-15 11:39:53.839570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591400, cid 2, qid 0 00:21:13.508 [2024-11-15 11:39:53.839577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.508 [2024-11-15 11:39:53.839584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591700, cid 4, qid 0 00:21:13.508 [2024-11-15 11:39:53.839699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.508 [2024-11-15 11:39:53.839713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.508 [2024-11-15 11:39:53.839721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.508 
[2024-11-15 11:39:53.839732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591700) on tqpair=0x52f690 00:21:13.509 [2024-11-15 11:39:53.839746] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:13.509 [2024-11-15 11:39:53.839756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:13.509 [2024-11-15 11:39:53.839774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.839784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x52f690) 00:21:13.509 [2024-11-15 11:39:53.839795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.509 [2024-11-15 11:39:53.839816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591700, cid 4, qid 0 00:21:13.509 [2024-11-15 11:39:53.839910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.509 [2024-11-15 11:39:53.839924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.509 [2024-11-15 11:39:53.839932] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.839938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x52f690): datao=0, datal=4096, cccid=4 00:21:13.509 [2024-11-15 11:39:53.839946] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x591700) on tqpair(0x52f690): expected_datao=0, payload_size=4096 00:21:13.509 [2024-11-15 11:39:53.839954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.839970] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.839980] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.839992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.509 [2024-11-15 11:39:53.840002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.509 [2024-11-15 11:39:53.840009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840016] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591700) on tqpair=0x52f690 00:21:13.509 [2024-11-15 11:39:53.840035] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:13.509 [2024-11-15 11:39:53.840070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x52f690) 00:21:13.509 [2024-11-15 11:39:53.840092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.509 [2024-11-15 11:39:53.840104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x52f690) 00:21:13.509 [2024-11-15 11:39:53.840128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.509 [2024-11-15 11:39:53.840155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591700, cid 4, qid 0 00:21:13.509 [2024-11-15 11:39:53.840167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591880, cid 5, qid 0 00:21:13.509 [2024-11-15 11:39:53.840315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.509 [2024-11-15 11:39:53.840329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.509 [2024-11-15 11:39:53.840336] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840343] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x52f690): datao=0, datal=1024, cccid=4 00:21:13.509 [2024-11-15 11:39:53.840351] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x591700) on tqpair(0x52f690): expected_datao=0, payload_size=1024 00:21:13.509 [2024-11-15 11:39:53.840362] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840373] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840381] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.509 [2024-11-15 11:39:53.840400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.509 [2024-11-15 11:39:53.840407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.840414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591880) on tqpair=0x52f690 00:21:13.509 [2024-11-15 11:39:53.880374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.509 [2024-11-15 11:39:53.880393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.509 [2024-11-15 11:39:53.880401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591700) on tqpair=0x52f690 00:21:13.509 [2024-11-15 11:39:53.880427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x52f690) 00:21:13.509 [2024-11-15 11:39:53.880449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.509 [2024-11-15 11:39:53.880479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591700, cid 4, qid 0 00:21:13.509 [2024-11-15 11:39:53.880588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.509 [2024-11-15 11:39:53.880603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.509 [2024-11-15 11:39:53.880610] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880617] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x52f690): datao=0, datal=3072, cccid=4 00:21:13.509 [2024-11-15 11:39:53.880625] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x591700) on tqpair(0x52f690): expected_datao=0, payload_size=3072 00:21:13.509 [2024-11-15 11:39:53.880632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:13.509 [2024-11-15 11:39:53.880643] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880651] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.509 [2024-11-15 11:39:53.880674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.509 [2024-11-15 11:39:53.880681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591700) on tqpair=0x52f690 00:21:13.509 [2024-11-15 11:39:53.880703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x52f690) 00:21:13.509 [2024-11-15 11:39:53.880723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.509 [2024-11-15 11:39:53.880751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591700, cid 4, qid 0 00:21:13.509 [2024-11-15 11:39:53.880860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.509 [2024-11-15 11:39:53.880875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.509 [2024-11-15 11:39:53.880882] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880889] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x52f690): datao=0, datal=8, cccid=4 00:21:13.509 [2024-11-15 11:39:53.880896] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x591700) on tqpair(0x52f690): expected_datao=0, payload_size=8 00:21:13.509 [2024-11-15 11:39:53.880904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880921] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.509 [2024-11-15 11:39:53.880930] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.510 [2024-11-15 11:39:53.925325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.510 [2024-11-15 11:39:53.925345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.510 [2024-11-15 11:39:53.925353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.510 [2024-11-15 11:39:53.925360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591700) on tqpair=0x52f690 00:21:13.510 ===================================================== 00:21:13.510 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:13.510 ===================================================== 00:21:13.510 Controller Capabilities/Features 00:21:13.510 ================================ 00:21:13.510 Vendor ID: 0000 00:21:13.510 Subsystem Vendor ID: 0000 00:21:13.510 Serial Number: .................... 00:21:13.510 Model Number: ........................................ 
00:21:13.510 Firmware Version: 25.01 00:21:13.510 Recommended Arb Burst: 0 00:21:13.510 IEEE OUI Identifier: 00 00 00 00:21:13.510 Multi-path I/O 00:21:13.510 May have multiple subsystem ports: No 00:21:13.510 May have multiple controllers: No 00:21:13.510 Associated with SR-IOV VF: No 00:21:13.510 Max Data Transfer Size: 131072 00:21:13.510 Max Number of Namespaces: 0 00:21:13.510 Max Number of I/O Queues: 1024 00:21:13.510 NVMe Specification Version (VS): 1.3 00:21:13.510 NVMe Specification Version (Identify): 1.3 00:21:13.510 Maximum Queue Entries: 128 00:21:13.510 Contiguous Queues Required: Yes 00:21:13.510 Arbitration Mechanisms Supported 00:21:13.510 Weighted Round Robin: Not Supported 00:21:13.510 Vendor Specific: Not Supported 00:21:13.510 Reset Timeout: 15000 ms 00:21:13.510 Doorbell Stride: 4 bytes 00:21:13.510 NVM Subsystem Reset: Not Supported 00:21:13.510 Command Sets Supported 00:21:13.510 NVM Command Set: Supported 00:21:13.510 Boot Partition: Not Supported 00:21:13.510 Memory Page Size Minimum: 4096 bytes 00:21:13.510 Memory Page Size Maximum: 4096 bytes 00:21:13.510 Persistent Memory Region: Not Supported 00:21:13.510 Optional Asynchronous Events Supported 00:21:13.510 Namespace Attribute Notices: Not Supported 00:21:13.510 Firmware Activation Notices: Not Supported 00:21:13.510 ANA Change Notices: Not Supported 00:21:13.510 PLE Aggregate Log Change Notices: Not Supported 00:21:13.510 LBA Status Info Alert Notices: Not Supported 00:21:13.510 EGE Aggregate Log Change Notices: Not Supported 00:21:13.510 Normal NVM Subsystem Shutdown event: Not Supported 00:21:13.510 Zone Descriptor Change Notices: Not Supported 00:21:13.510 Discovery Log Change Notices: Supported 00:21:13.510 Controller Attributes 00:21:13.510 128-bit Host Identifier: Not Supported 00:21:13.510 Non-Operational Permissive Mode: Not Supported 00:21:13.510 NVM Sets: Not Supported 00:21:13.510 Read Recovery Levels: Not Supported 00:21:13.510 Endurance Groups: Not Supported 00:21:13.510 Predictable Latency Mode: Not Supported 00:21:13.510 Traffic Based Keep ALive: Not Supported 00:21:13.510 Namespace Granularity: Not Supported 00:21:13.510 SQ Associations: Not Supported 00:21:13.510 UUID List: Not Supported 00:21:13.510 Multi-Domain Subsystem: Not Supported 00:21:13.510 Fixed Capacity Management: Not Supported 00:21:13.510 Variable Capacity Management: Not Supported 00:21:13.510 Delete Endurance Group: Not Supported 00:21:13.510 Delete NVM Set: Not Supported 00:21:13.510 Extended LBA Formats Supported: Not Supported 00:21:13.510 Flexible Data Placement Supported: Not Supported 00:21:13.510 00:21:13.510 Controller Memory Buffer Support 00:21:13.510 ================================ 00:21:13.510 Supported: No 00:21:13.510 00:21:13.510 Persistent Memory Region Support 00:21:13.510 ================================ 00:21:13.510 Supported: No 00:21:13.510 00:21:13.510 Admin Command Set Attributes 00:21:13.510 ============================ 00:21:13.510 Security Send/Receive: Not Supported 00:21:13.510 Format NVM: Not Supported 00:21:13.510 Firmware Activate/Download: Not Supported 00:21:13.510 Namespace Management: Not Supported 00:21:13.510 Device Self-Test: Not Supported 00:21:13.510 Directives: Not Supported 00:21:13.510 NVMe-MI: Not Supported 00:21:13.510 Virtualization Management: Not Supported 00:21:13.510 Doorbell Buffer Config: Not Supported 00:21:13.510 Get LBA Status Capability: Not Supported 00:21:13.510 Command & Feature Lockdown Capability: Not Supported 00:21:13.510 Abort Command Limit: 1 00:21:13.510 Async 
Event Request Limit: 4 00:21:13.510 Number of Firmware Slots: N/A 00:21:13.510 Firmware Slot 1 Read-Only: N/A 00:21:13.510 Firmware Activation Without Reset: N/A 00:21:13.510 Multiple Update Detection Support: N/A 00:21:13.510 Firmware Update Granularity: No Information Provided 00:21:13.510 Per-Namespace SMART Log: No 00:21:13.510 Asymmetric Namespace Access Log Page: Not Supported 00:21:13.510 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:13.510 Command Effects Log Page: Not Supported 00:21:13.510 Get Log Page Extended Data: Supported 00:21:13.510 Telemetry Log Pages: Not Supported 00:21:13.510 Persistent Event Log Pages: Not Supported 00:21:13.510 Supported Log Pages Log Page: May Support 00:21:13.510 Commands Supported & Effects Log Page: Not Supported 00:21:13.510 Feature Identifiers & Effects Log Page:May Support 00:21:13.510 NVMe-MI Commands & Effects Log Page: May Support 00:21:13.510 Data Area 4 for Telemetry Log: Not Supported 00:21:13.510 Error Log Page Entries Supported: 128 00:21:13.510 Keep Alive: Not Supported 00:21:13.510 00:21:13.510 NVM Command Set Attributes 00:21:13.510 ========================== 00:21:13.510 Submission Queue Entry Size 00:21:13.510 Max: 1 00:21:13.510 Min: 1 00:21:13.510 Completion Queue Entry Size 00:21:13.510 Max: 1 00:21:13.510 Min: 1 00:21:13.510 Number of Namespaces: 0 00:21:13.511 Compare Command: Not Supported 00:21:13.511 Write Uncorrectable Command: Not Supported 00:21:13.511 Dataset Management Command: Not Supported 00:21:13.511 Write Zeroes Command: Not Supported 00:21:13.511 Set Features Save Field: Not Supported 00:21:13.511 Reservations: Not Supported 00:21:13.511 Timestamp: Not Supported 00:21:13.511 Copy: Not Supported 00:21:13.511 Volatile Write Cache: Not Present 00:21:13.511 Atomic Write Unit (Normal): 1 00:21:13.511 Atomic Write Unit (PFail): 1 00:21:13.511 Atomic Compare & Write Unit: 1 00:21:13.511 Fused Compare & Write: Supported 00:21:13.511 Scatter-Gather List 00:21:13.511 SGL Command Set: Supported 00:21:13.511 SGL Keyed: Supported 00:21:13.511 SGL Bit Bucket Descriptor: Not Supported 00:21:13.511 SGL Metadata Pointer: Not Supported 00:21:13.511 Oversized SGL: Not Supported 00:21:13.511 SGL Metadata Address: Not Supported 00:21:13.511 SGL Offset: Supported 00:21:13.511 Transport SGL Data Block: Not Supported 00:21:13.511 Replay Protected Memory Block: Not Supported 00:21:13.511 00:21:13.511 Firmware Slot Information 00:21:13.511 ========================= 00:21:13.511 Active slot: 0 00:21:13.511 00:21:13.511 00:21:13.511 Error Log 00:21:13.511 ========= 00:21:13.511 00:21:13.511 Active Namespaces 00:21:13.511 ================= 00:21:13.511 Discovery Log Page 00:21:13.511 ================== 00:21:13.511 Generation Counter: 2 00:21:13.511 Number of Records: 2 00:21:13.511 Record Format: 0 00:21:13.511 00:21:13.511 Discovery Log Entry 0 00:21:13.511 ---------------------- 00:21:13.511 Transport Type: 3 (TCP) 00:21:13.511 Address Family: 1 (IPv4) 00:21:13.511 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:13.511 Entry Flags: 00:21:13.511 Duplicate Returned Information: 1 00:21:13.511 Explicit Persistent Connection Support for Discovery: 1 00:21:13.511 Transport Requirements: 00:21:13.511 Secure Channel: Not Required 00:21:13.511 Port ID: 0 (0x0000) 00:21:13.511 Controller ID: 65535 (0xffff) 00:21:13.511 Admin Max SQ Size: 128 00:21:13.511 Transport Service Identifier: 4420 00:21:13.511 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:13.511 Transport Address: 10.0.0.2 00:21:13.511 
Discovery Log Entry 1 00:21:13.511 ---------------------- 00:21:13.511 Transport Type: 3 (TCP) 00:21:13.511 Address Family: 1 (IPv4) 00:21:13.511 Subsystem Type: 2 (NVM Subsystem) 00:21:13.511 Entry Flags: 00:21:13.511 Duplicate Returned Information: 0 00:21:13.511 Explicit Persistent Connection Support for Discovery: 0 00:21:13.511 Transport Requirements: 00:21:13.511 Secure Channel: Not Required 00:21:13.511 Port ID: 0 (0x0000) 00:21:13.511 Controller ID: 65535 (0xffff) 00:21:13.511 Admin Max SQ Size: 128 00:21:13.511 Transport Service Identifier: 4420 00:21:13.511 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:13.511 Transport Address: 10.0.0.2 [2024-11-15 11:39:53.925476] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:13.511 [2024-11-15 11:39:53.925499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591100) on tqpair=0x52f690 00:21:13.511 [2024-11-15 11:39:53.925511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.511 [2024-11-15 11:39:53.925521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591280) on tqpair=0x52f690 00:21:13.511 [2024-11-15 11:39:53.925529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.511 [2024-11-15 11:39:53.925537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591400) on tqpair=0x52f690 00:21:13.511 [2024-11-15 11:39:53.925545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.511 [2024-11-15 11:39:53.925553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.511 [2024-11-15 11:39:53.925561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.511 [2024-11-15 11:39:53.925579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.925589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.925595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.511 [2024-11-15 11:39:53.925607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.511 [2024-11-15 11:39:53.925647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.511 [2024-11-15 11:39:53.925738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.511 [2024-11-15 11:39:53.925753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.511 [2024-11-15 11:39:53.925761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.925768] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.511 [2024-11-15 11:39:53.925780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.925788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.925795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.511 [2024-11-15 11:39:53.925806] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.511 [2024-11-15 11:39:53.925834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.511 [2024-11-15 11:39:53.925945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.511 [2024-11-15 11:39:53.925958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.511 [2024-11-15 11:39:53.925966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.925973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.511 [2024-11-15 11:39:53.925981] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:13.511 [2024-11-15 11:39:53.925989] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:13.511 [2024-11-15 11:39:53.926009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.511 [2024-11-15 11:39:53.926019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.512 [2024-11-15 11:39:53.926036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.512 [2024-11-15 11:39:53.926058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.512 [2024-11-15 11:39:53.926144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.512 [2024-11-15 11:39:53.926156] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.512 [2024-11-15 11:39:53.926164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.512 [2024-11-15 11:39:53.926187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.512 [2024-11-15 11:39:53.926214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.512 [2024-11-15 11:39:53.926234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.512 [2024-11-15 11:39:53.926326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.512 [2024-11-15 11:39:53.926341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.512 [2024-11-15 11:39:53.926348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.512 [2024-11-15 11:39:53.926372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926388] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.512 [2024-11-15 11:39:53.926399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.512 [2024-11-15 11:39:53.926420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.512 [2024-11-15 11:39:53.926501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.512 [2024-11-15 11:39:53.926513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.512 [2024-11-15 11:39:53.926520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.512 [2024-11-15 11:39:53.926543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.512 [2024-11-15 11:39:53.926570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.512 [2024-11-15 11:39:53.926591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.512 [2024-11-15 11:39:53.926674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.512 [2024-11-15 11:39:53.926686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.512 [2024-11-15 11:39:53.926694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.512 [2024-11-15 11:39:53.926716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.512 [2024-11-15 11:39:53.926748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.512 [2024-11-15 11:39:53.926769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.512 [2024-11-15 11:39:53.926852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.512 [2024-11-15 11:39:53.926865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.512 [2024-11-15 11:39:53.926872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.512 [2024-11-15 11:39:53.926896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.512 [2024-11-15 11:39:53.926912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.512 [2024-11-15 11:39:53.926922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.512 [2024-11-15 11:39:53.926943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.842 [2024-11-15 11:39:53.927028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.842 [2024-11-15 11:39:53.927042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.842 [2024-11-15 11:39:53.927050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.842 [2024-11-15 11:39:53.927057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.842 [2024-11-15 11:39:53.927073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.842 [2024-11-15 11:39:53.927083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.842 [2024-11-15 11:39:53.927089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.842 [2024-11-15 11:39:53.927100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.927120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.927204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.927216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.927224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.927247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.927274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.927295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.927404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.927419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.927427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.927449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927459] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.927481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.927502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.927588] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.927602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.927610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.927633] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.927660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.927681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.927763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.927776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.927783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.927806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.927832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.927853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.927935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.927948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.927956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.927979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.927995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.928006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.928027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.928108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.928122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.928130] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.928153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.928184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.928205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.928288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.928312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.928321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.928345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.928372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.928394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.928486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.928499] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.928507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.928529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.928555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.928575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.928660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.928672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.928680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 
[2024-11-15 11:39:53.928703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.928730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.928750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.928835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.928849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.928857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.928880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.928896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.928907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.928932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.929018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.929031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.929038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.929045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.929061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.929071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.929077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.843 [2024-11-15 11:39:53.929088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.843 [2024-11-15 11:39:53.929109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.843 [2024-11-15 11:39:53.929188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.843 [2024-11-15 11:39:53.929200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.843 [2024-11-15 11:39:53.929208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.929215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.843 [2024-11-15 11:39:53.929230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.843 [2024-11-15 11:39:53.929240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.843 [2024-11-15 
11:39:53.929247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.844 [2024-11-15 11:39:53.929257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:53.929277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.844 [2024-11-15 11:39:53.933318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:53.933334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:53.933342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:53.933349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.844 [2024-11-15 11:39:53.933367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:53.933377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:53.933384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x52f690) 00:21:13.844 [2024-11-15 11:39:53.933395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:53.933418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x591580, cid 3, qid 0 00:21:13.844 [2024-11-15 11:39:53.933506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:53.933518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:53.933526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:53.933532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x591580) on tqpair=0x52f690 00:21:13.844 [2024-11-15 11:39:53.933545] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:13.844 00:21:13.844 11:39:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:13.844 [2024-11-15 11:39:53.969560] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:21:13.844 [2024-11-15 11:39:53.969616] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2987844 ] 00:21:13.844 [2024-11-15 11:39:54.021313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:13.844 [2024-11-15 11:39:54.021371] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:13.844 [2024-11-15 11:39:54.021382] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:13.844 [2024-11-15 11:39:54.021397] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:13.844 [2024-11-15 11:39:54.021411] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:13.844 [2024-11-15 11:39:54.025660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:13.844 [2024-11-15 11:39:54.025713] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d79690 0 00:21:13.844 [2024-11-15 11:39:54.025859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:13.844 [2024-11-15 11:39:54.025876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:13.844 [2024-11-15 11:39:54.025884] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:13.844 [2024-11-15 11:39:54.025890] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:13.844 [2024-11-15 11:39:54.025924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.025936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.025943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.844 [2024-11-15 11:39:54.025957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:13.844 [2024-11-15 11:39:54.025983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.844 [2024-11-15 11:39:54.033317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:54.033336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:54.033344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.844 [2024-11-15 11:39:54.033369] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:13.844 [2024-11-15 11:39:54.033381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:13.844 [2024-11-15 11:39:54.033391] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:13.844 [2024-11-15 11:39:54.033409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033425] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.844 [2024-11-15 11:39:54.033436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:54.033461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.844 [2024-11-15 11:39:54.033591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:54.033606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:54.033613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.844 [2024-11-15 11:39:54.033634] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:13.844 [2024-11-15 11:39:54.033648] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:13.844 [2024-11-15 11:39:54.033661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.844 [2024-11-15 11:39:54.033686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:54.033708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.844 [2024-11-15 11:39:54.033785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:54.033797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:54.033804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.844 [2024-11-15 11:39:54.033819] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:13.844 [2024-11-15 11:39:54.033833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:13.844 [2024-11-15 11:39:54.033846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.844 [2024-11-15 11:39:54.033870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:54.033892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.844 [2024-11-15 11:39:54.033967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:54.033980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 
11:39:54.033987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.033994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.844 [2024-11-15 11:39:54.034002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:13.844 [2024-11-15 11:39:54.034019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.034028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.034034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.844 [2024-11-15 11:39:54.034045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:54.034066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.844 [2024-11-15 11:39:54.034140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:54.034152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:54.034159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.034166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.844 [2024-11-15 11:39:54.034173] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:13.844 [2024-11-15 11:39:54.034186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:13.844 [2024-11-15 11:39:54.034199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:13.844 [2024-11-15 11:39:54.034312] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:13.844 [2024-11-15 11:39:54.034323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:13.844 [2024-11-15 11:39:54.034336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.034344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.034350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.844 [2024-11-15 11:39:54.034361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.844 [2024-11-15 11:39:54.034383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.844 [2024-11-15 11:39:54.034493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.844 [2024-11-15 11:39:54.034505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.844 [2024-11-15 11:39:54.034512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.844 [2024-11-15 11:39:54.034519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.844 
[2024-11-15 11:39:54.034527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:13.845 [2024-11-15 11:39:54.034543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.034570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.845 [2024-11-15 11:39:54.034590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.845 [2024-11-15 11:39:54.034670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.845 [2024-11-15 11:39:54.034684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.845 [2024-11-15 11:39:54.034692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.845 [2024-11-15 11:39:54.034706] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:13.845 [2024-11-15 11:39:54.034714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.034728] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:13.845 [2024-11-15 11:39:54.034743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.034757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.034776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.845 [2024-11-15 11:39:54.034798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.845 [2024-11-15 11:39:54.034932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.845 [2024-11-15 11:39:54.034948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.845 [2024-11-15 11:39:54.034957] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034963] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=4096, cccid=0 00:21:13.845 [2024-11-15 11:39:54.034971] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddb100) on tqpair(0x1d79690): expected_datao=0, payload_size=4096 00:21:13.845 [2024-11-15 11:39:54.034978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034988] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.034996] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.845 [2024-11-15 11:39:54.035018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.845 [2024-11-15 11:39:54.035025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.845 [2024-11-15 11:39:54.035042] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:13.845 [2024-11-15 11:39:54.035051] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:13.845 [2024-11-15 11:39:54.035058] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:13.845 [2024-11-15 11:39:54.035070] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:13.845 [2024-11-15 11:39:54.035079] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:13.845 [2024-11-15 11:39:54.035087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035135] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.845 [2024-11-15 11:39:54.035169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.845 [2024-11-15 11:39:54.035244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.845 [2024-11-15 11:39:54.035256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.845 [2024-11-15 11:39:54.035263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.845 [2024-11-15 11:39:54.035281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.845 [2024-11-15 11:39:54.035327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 
11:39:54.035341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.845 [2024-11-15 11:39:54.035371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.845 [2024-11-15 11:39:54.035405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.845 [2024-11-15 11:39:54.035436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.845 [2024-11-15 11:39:54.035504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb100, cid 0, qid 0 00:21:13.845 [2024-11-15 11:39:54.035516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb280, cid 1, qid 0 00:21:13.845 [2024-11-15 11:39:54.035524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb400, cid 2, qid 0 00:21:13.845 [2024-11-15 11:39:54.035532] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.845 [2024-11-15 11:39:54.035539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.845 [2024-11-15 11:39:54.035681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.845 [2024-11-15 11:39:54.035693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.845 [2024-11-15 11:39:54.035700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.845 [2024-11-15 11:39:54.035720] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:13.845 [2024-11-15 11:39:54.035730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035755] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.035766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.035791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.845 [2024-11-15 11:39:54.035812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.845 [2024-11-15 11:39:54.035893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.845 [2024-11-15 11:39:54.035907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.845 [2024-11-15 11:39:54.035915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.035922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.845 [2024-11-15 11:39:54.035991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.036011] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:13.845 [2024-11-15 11:39:54.036026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.845 [2024-11-15 11:39:54.036034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.845 [2024-11-15 11:39:54.036044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.845 [2024-11-15 11:39:54.036067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.845 [2024-11-15 11:39:54.036165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.845 [2024-11-15 11:39:54.036179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.845 [2024-11-15 11:39:54.036187] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.036193] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=4096, cccid=4 00:21:13.846 [2024-11-15 11:39:54.036201] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddb700) on tqpair(0x1d79690): expected_datao=0, payload_size=4096 00:21:13.846 [2024-11-15 11:39:54.036208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.036225] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.036234] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 
11:39:54.080317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.080337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.080345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.080353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.080370] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:13.846 [2024-11-15 11:39:54.080392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.080412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.080427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.080436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.080448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.846 [2024-11-15 11:39:54.080472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.846 [2024-11-15 11:39:54.080609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.846 [2024-11-15 11:39:54.080622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.846 [2024-11-15 11:39:54.080630] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.080637] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=4096, cccid=4 00:21:13.846 [2024-11-15 11:39:54.080645] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddb700) on tqpair(0x1d79690): expected_datao=0, payload_size=4096 00:21:13.846 [2024-11-15 11:39:54.080656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.080674] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.080684] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.123337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.123344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.123375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.123445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.846 [2024-11-15 11:39:54.123470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.846 [2024-11-15 11:39:54.123590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.846 [2024-11-15 11:39:54.123604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.846 [2024-11-15 11:39:54.123611] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123618] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=4096, cccid=4 00:21:13.846 [2024-11-15 11:39:54.123625] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddb700) on tqpair(0x1d79690): expected_datao=0, payload_size=4096 00:21:13.846 [2024-11-15 11:39:54.123632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123649] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123659] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.123680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.123687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.123707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123766] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123774] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:13.846 [2024-11-15 11:39:54.123786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:13.846 [2024-11-15 11:39:54.123795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:13.846 [2024-11-15 11:39:54.123814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 
[2024-11-15 11:39:54.123823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.123833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.846 [2024-11-15 11:39:54.123845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.123858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.123868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:13.846 [2024-11-15 11:39:54.123895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.846 [2024-11-15 11:39:54.123922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb880, cid 5, qid 0 00:21:13.846 [2024-11-15 11:39:54.124123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.124139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.124146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.124163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.124172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.124179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb880) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.124201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.124222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.846 [2024-11-15 11:39:54.124244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb880, cid 5, qid 0 00:21:13.846 [2024-11-15 11:39:54.124348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.124363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.124370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb880) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.124393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.124413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.846 [2024-11-15 11:39:54.124435] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb880, cid 5, qid 0 00:21:13.846 [2024-11-15 11:39:54.124511] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.124523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.124531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb880) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.124557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d79690) 00:21:13.846 [2024-11-15 11:39:54.124578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.846 [2024-11-15 11:39:54.124599] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb880, cid 5, qid 0 00:21:13.846 [2024-11-15 11:39:54.124675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.846 [2024-11-15 11:39:54.124688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.846 [2024-11-15 11:39:54.124695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb880) on tqpair=0x1d79690 00:21:13.846 [2024-11-15 11:39:54.124726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.846 [2024-11-15 11:39:54.124738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d79690) 00:21:13.847 [2024-11-15 11:39:54.124748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.847 [2024-11-15 11:39:54.124761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.124769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d79690) 00:21:13.847 [2024-11-15 11:39:54.124778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.847 [2024-11-15 11:39:54.124790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.124798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d79690) 00:21:13.847 [2024-11-15 11:39:54.124808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.847 [2024-11-15 11:39:54.124820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.124827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d79690) 00:21:13.847 [2024-11-15 11:39:54.124837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.847 [2024-11-15 11:39:54.124859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb880, cid 5, qid 0 00:21:13.847 
[2024-11-15 11:39:54.124870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb700, cid 4, qid 0 00:21:13.847 [2024-11-15 11:39:54.124879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddba00, cid 6, qid 0 00:21:13.847 [2024-11-15 11:39:54.124886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddbb80, cid 7, qid 0 00:21:13.847 [2024-11-15 11:39:54.125043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.847 [2024-11-15 11:39:54.125056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.847 [2024-11-15 11:39:54.125063] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125070] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=8192, cccid=5 00:21:13.847 [2024-11-15 11:39:54.125077] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddb880) on tqpair(0x1d79690): expected_datao=0, payload_size=8192 00:21:13.847 [2024-11-15 11:39:54.125084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125105] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125114] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.847 [2024-11-15 11:39:54.125132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.847 [2024-11-15 11:39:54.125142] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125149] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=512, cccid=4 00:21:13.847 [2024-11-15 11:39:54.125157] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddb700) on tqpair(0x1d79690): expected_datao=0, payload_size=512 00:21:13.847 [2024-11-15 11:39:54.125164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125173] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125180] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.847 [2024-11-15 11:39:54.125198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.847 [2024-11-15 11:39:54.125204] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125210] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=512, cccid=6 00:21:13.847 [2024-11-15 11:39:54.125218] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddba00) on tqpair(0x1d79690): expected_datao=0, payload_size=512 00:21:13.847 [2024-11-15 11:39:54.125225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125241] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:13.847 [2024-11-15 11:39:54.125258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:13.847 [2024-11-15 11:39:54.125265] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125271] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d79690): datao=0, datal=4096, cccid=7 00:21:13.847 [2024-11-15 11:39:54.125278] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ddbb80) on tqpair(0x1d79690): expected_datao=0, payload_size=4096 00:21:13.847 [2024-11-15 11:39:54.125286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125295] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125310] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.847 [2024-11-15 11:39:54.125333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.847 [2024-11-15 11:39:54.125340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb880) on tqpair=0x1d79690 00:21:13.847 [2024-11-15 11:39:54.125368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.847 [2024-11-15 11:39:54.125380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.847 [2024-11-15 11:39:54.125387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb700) on tqpair=0x1d79690 00:21:13.847 [2024-11-15 11:39:54.125416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.847 [2024-11-15 11:39:54.125427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.847 [2024-11-15 11:39:54.125434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddba00) on tqpair=0x1d79690 00:21:13.847 [2024-11-15 11:39:54.125450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.847 [2024-11-15 11:39:54.125460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.847 [2024-11-15 11:39:54.125466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.847 [2024-11-15 11:39:54.125473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddbb80) on tqpair=0x1d79690 00:21:13.847 ===================================================== 00:21:13.847 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.847 ===================================================== 00:21:13.847 Controller Capabilities/Features 00:21:13.847 ================================ 00:21:13.847 Vendor ID: 8086 00:21:13.847 Subsystem Vendor ID: 8086 00:21:13.847 Serial Number: SPDK00000000000001 00:21:13.847 Model Number: SPDK bdev Controller 00:21:13.847 Firmware Version: 25.01 00:21:13.847 Recommended Arb Burst: 6 00:21:13.847 IEEE OUI Identifier: e4 d2 5c 00:21:13.847 Multi-path I/O 00:21:13.847 May have multiple subsystem ports: Yes 00:21:13.847 May have multiple controllers: Yes 00:21:13.847 Associated with SR-IOV VF: No 00:21:13.847 Max Data Transfer Size: 131072 00:21:13.847 Max Number of Namespaces: 32 00:21:13.847 Max Number of I/O Queues: 127 00:21:13.847 NVMe Specification Version (VS): 1.3 00:21:13.847 NVMe Specification Version (Identify): 1.3 
00:21:13.847 Maximum Queue Entries: 128 00:21:13.847 Contiguous Queues Required: Yes 00:21:13.847 Arbitration Mechanisms Supported 00:21:13.847 Weighted Round Robin: Not Supported 00:21:13.847 Vendor Specific: Not Supported 00:21:13.847 Reset Timeout: 15000 ms 00:21:13.847 Doorbell Stride: 4 bytes 00:21:13.847 NVM Subsystem Reset: Not Supported 00:21:13.847 Command Sets Supported 00:21:13.847 NVM Command Set: Supported 00:21:13.847 Boot Partition: Not Supported 00:21:13.847 Memory Page Size Minimum: 4096 bytes 00:21:13.847 Memory Page Size Maximum: 4096 bytes 00:21:13.847 Persistent Memory Region: Not Supported 00:21:13.847 Optional Asynchronous Events Supported 00:21:13.847 Namespace Attribute Notices: Supported 00:21:13.847 Firmware Activation Notices: Not Supported 00:21:13.847 ANA Change Notices: Not Supported 00:21:13.847 PLE Aggregate Log Change Notices: Not Supported 00:21:13.847 LBA Status Info Alert Notices: Not Supported 00:21:13.847 EGE Aggregate Log Change Notices: Not Supported 00:21:13.847 Normal NVM Subsystem Shutdown event: Not Supported 00:21:13.847 Zone Descriptor Change Notices: Not Supported 00:21:13.847 Discovery Log Change Notices: Not Supported 00:21:13.847 Controller Attributes 00:21:13.847 128-bit Host Identifier: Supported 00:21:13.847 Non-Operational Permissive Mode: Not Supported 00:21:13.847 NVM Sets: Not Supported 00:21:13.847 Read Recovery Levels: Not Supported 00:21:13.847 Endurance Groups: Not Supported 00:21:13.848 Predictable Latency Mode: Not Supported 00:21:13.848 Traffic Based Keep ALive: Not Supported 00:21:13.848 Namespace Granularity: Not Supported 00:21:13.848 SQ Associations: Not Supported 00:21:13.848 UUID List: Not Supported 00:21:13.848 Multi-Domain Subsystem: Not Supported 00:21:13.848 Fixed Capacity Management: Not Supported 00:21:13.848 Variable Capacity Management: Not Supported 00:21:13.848 Delete Endurance Group: Not Supported 00:21:13.848 Delete NVM Set: Not Supported 00:21:13.848 Extended LBA Formats Supported: Not Supported 00:21:13.848 Flexible Data Placement Supported: Not Supported 00:21:13.848 00:21:13.848 Controller Memory Buffer Support 00:21:13.848 ================================ 00:21:13.848 Supported: No 00:21:13.848 00:21:13.848 Persistent Memory Region Support 00:21:13.848 ================================ 00:21:13.848 Supported: No 00:21:13.848 00:21:13.848 Admin Command Set Attributes 00:21:13.848 ============================ 00:21:13.848 Security Send/Receive: Not Supported 00:21:13.848 Format NVM: Not Supported 00:21:13.848 Firmware Activate/Download: Not Supported 00:21:13.848 Namespace Management: Not Supported 00:21:13.848 Device Self-Test: Not Supported 00:21:13.848 Directives: Not Supported 00:21:13.848 NVMe-MI: Not Supported 00:21:13.848 Virtualization Management: Not Supported 00:21:13.848 Doorbell Buffer Config: Not Supported 00:21:13.848 Get LBA Status Capability: Not Supported 00:21:13.848 Command & Feature Lockdown Capability: Not Supported 00:21:13.848 Abort Command Limit: 4 00:21:13.848 Async Event Request Limit: 4 00:21:13.848 Number of Firmware Slots: N/A 00:21:13.848 Firmware Slot 1 Read-Only: N/A 00:21:13.848 Firmware Activation Without Reset: N/A 00:21:13.848 Multiple Update Detection Support: N/A 00:21:13.848 Firmware Update Granularity: No Information Provided 00:21:13.848 Per-Namespace SMART Log: No 00:21:13.848 Asymmetric Namespace Access Log Page: Not Supported 00:21:13.848 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:13.848 Command Effects Log Page: Supported 00:21:13.848 Get Log Page Extended 
Data: Supported 00:21:13.848 Telemetry Log Pages: Not Supported 00:21:13.848 Persistent Event Log Pages: Not Supported 00:21:13.848 Supported Log Pages Log Page: May Support 00:21:13.848 Commands Supported & Effects Log Page: Not Supported 00:21:13.848 Feature Identifiers & Effects Log Page:May Support 00:21:13.848 NVMe-MI Commands & Effects Log Page: May Support 00:21:13.848 Data Area 4 for Telemetry Log: Not Supported 00:21:13.848 Error Log Page Entries Supported: 128 00:21:13.848 Keep Alive: Supported 00:21:13.848 Keep Alive Granularity: 10000 ms 00:21:13.848 00:21:13.848 NVM Command Set Attributes 00:21:13.848 ========================== 00:21:13.848 Submission Queue Entry Size 00:21:13.848 Max: 64 00:21:13.848 Min: 64 00:21:13.848 Completion Queue Entry Size 00:21:13.848 Max: 16 00:21:13.848 Min: 16 00:21:13.848 Number of Namespaces: 32 00:21:13.848 Compare Command: Supported 00:21:13.848 Write Uncorrectable Command: Not Supported 00:21:13.848 Dataset Management Command: Supported 00:21:13.848 Write Zeroes Command: Supported 00:21:13.848 Set Features Save Field: Not Supported 00:21:13.848 Reservations: Supported 00:21:13.848 Timestamp: Not Supported 00:21:13.848 Copy: Supported 00:21:13.848 Volatile Write Cache: Present 00:21:13.848 Atomic Write Unit (Normal): 1 00:21:13.848 Atomic Write Unit (PFail): 1 00:21:13.848 Atomic Compare & Write Unit: 1 00:21:13.848 Fused Compare & Write: Supported 00:21:13.848 Scatter-Gather List 00:21:13.848 SGL Command Set: Supported 00:21:13.848 SGL Keyed: Supported 00:21:13.848 SGL Bit Bucket Descriptor: Not Supported 00:21:13.848 SGL Metadata Pointer: Not Supported 00:21:13.848 Oversized SGL: Not Supported 00:21:13.848 SGL Metadata Address: Not Supported 00:21:13.848 SGL Offset: Supported 00:21:13.848 Transport SGL Data Block: Not Supported 00:21:13.848 Replay Protected Memory Block: Not Supported 00:21:13.848 00:21:13.848 Firmware Slot Information 00:21:13.848 ========================= 00:21:13.848 Active slot: 1 00:21:13.848 Slot 1 Firmware Revision: 25.01 00:21:13.848 00:21:13.848 00:21:13.848 Commands Supported and Effects 00:21:13.848 ============================== 00:21:13.848 Admin Commands 00:21:13.848 -------------- 00:21:13.848 Get Log Page (02h): Supported 00:21:13.848 Identify (06h): Supported 00:21:13.848 Abort (08h): Supported 00:21:13.848 Set Features (09h): Supported 00:21:13.848 Get Features (0Ah): Supported 00:21:13.848 Asynchronous Event Request (0Ch): Supported 00:21:13.848 Keep Alive (18h): Supported 00:21:13.848 I/O Commands 00:21:13.848 ------------ 00:21:13.848 Flush (00h): Supported LBA-Change 00:21:13.848 Write (01h): Supported LBA-Change 00:21:13.848 Read (02h): Supported 00:21:13.848 Compare (05h): Supported 00:21:13.848 Write Zeroes (08h): Supported LBA-Change 00:21:13.848 Dataset Management (09h): Supported LBA-Change 00:21:13.848 Copy (19h): Supported LBA-Change 00:21:13.848 00:21:13.848 Error Log 00:21:13.848 ========= 00:21:13.848 00:21:13.848 Arbitration 00:21:13.848 =========== 00:21:13.848 Arbitration Burst: 1 00:21:13.848 00:21:13.848 Power Management 00:21:13.848 ================ 00:21:13.848 Number of Power States: 1 00:21:13.848 Current Power State: Power State #0 00:21:13.848 Power State #0: 00:21:13.848 Max Power: 0.00 W 00:21:13.848 Non-Operational State: Operational 00:21:13.848 Entry Latency: Not Reported 00:21:13.848 Exit Latency: Not Reported 00:21:13.848 Relative Read Throughput: 0 00:21:13.848 Relative Read Latency: 0 00:21:13.848 Relative Write Throughput: 0 00:21:13.848 Relative Write Latency: 0 
00:21:13.848 Idle Power: Not Reported 00:21:13.848 Active Power: Not Reported 00:21:13.848 Non-Operational Permissive Mode: Not Supported 00:21:13.848 00:21:13.848 Health Information 00:21:13.848 ================== 00:21:13.848 Critical Warnings: 00:21:13.848 Available Spare Space: OK 00:21:13.848 Temperature: OK 00:21:13.848 Device Reliability: OK 00:21:13.848 Read Only: No 00:21:13.848 Volatile Memory Backup: OK 00:21:13.848 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:13.848 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:13.848 Available Spare: 0% 00:21:13.848 Available Spare Threshold: 0% 00:21:13.848 Life Percentage Used:[2024-11-15 11:39:54.125605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.848 [2024-11-15 11:39:54.125617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d79690) 00:21:13.848 [2024-11-15 11:39:54.125628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.848 [2024-11-15 11:39:54.125651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddbb80, cid 7, qid 0 00:21:13.848 [2024-11-15 11:39:54.125814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.848 [2024-11-15 11:39:54.125827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.848 [2024-11-15 11:39:54.125834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.848 [2024-11-15 11:39:54.125841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddbb80) on tqpair=0x1d79690 00:21:13.848 [2024-11-15 11:39:54.125884] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:13.848 [2024-11-15 11:39:54.125903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb100) on tqpair=0x1d79690 00:21:13.848 [2024-11-15 11:39:54.125914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.848 [2024-11-15 11:39:54.125923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb280) on tqpair=0x1d79690 00:21:13.848 [2024-11-15 11:39:54.125930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.848 [2024-11-15 11:39:54.125938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb400) on tqpair=0x1d79690 00:21:13.848 [2024-11-15 11:39:54.125946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.848 [2024-11-15 11:39:54.125954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.848 [2024-11-15 11:39:54.125961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:13.848 [2024-11-15 11:39:54.125973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.848 [2024-11-15 11:39:54.125982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.848 [2024-11-15 11:39:54.125988] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.848 [2024-11-15 11:39:54.125999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:13.848 [2024-11-15 11:39:54.126022] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.848 [2024-11-15 11:39:54.126216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.848 [2024-11-15 11:39:54.126230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.848 [2024-11-15 11:39:54.126238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.848 [2024-11-15 11:39:54.126244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.848 [2024-11-15 11:39:54.126256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.848 [2024-11-15 11:39:54.126264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.849 [2024-11-15 11:39:54.126281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.849 [2024-11-15 11:39:54.126315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.849 [2024-11-15 11:39:54.126449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.849 [2024-11-15 11:39:54.126464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.849 [2024-11-15 11:39:54.126471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.849 [2024-11-15 11:39:54.126490] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:13.849 [2024-11-15 11:39:54.126498] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:13.849 [2024-11-15 11:39:54.126515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.849 [2024-11-15 11:39:54.126541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.849 [2024-11-15 11:39:54.126563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.849 [2024-11-15 11:39:54.126691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.849 [2024-11-15 11:39:54.126705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.849 [2024-11-15 11:39:54.126713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.849 [2024-11-15 11:39:54.126736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.849 [2024-11-15 11:39:54.126763] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.849 [2024-11-15 11:39:54.126784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.849 [2024-11-15 11:39:54.126884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.849 [2024-11-15 11:39:54.126897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.849 [2024-11-15 11:39:54.126905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.849 [2024-11-15 11:39:54.126928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.126944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.849 [2024-11-15 11:39:54.126955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.849 [2024-11-15 11:39:54.126976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.849 [2024-11-15 11:39:54.127048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.849 [2024-11-15 11:39:54.127062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.849 [2024-11-15 11:39:54.127069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.127076] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.849 [2024-11-15 11:39:54.127093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.127102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.127109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.849 [2024-11-15 11:39:54.127119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.849 [2024-11-15 11:39:54.127140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.849 [2024-11-15 11:39:54.127215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.849 [2024-11-15 11:39:54.127229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.849 [2024-11-15 11:39:54.127240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.127247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.849 [2024-11-15 11:39:54.127264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.127274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.127280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d79690) 00:21:13.849 [2024-11-15 11:39:54.127291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.849 [2024-11-15 11:39:54.131321] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ddb580, cid 3, qid 0 00:21:13.849 [2024-11-15 11:39:54.131481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:13.849 [2024-11-15 11:39:54.131495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:13.849 [2024-11-15 11:39:54.131502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:13.849 [2024-11-15 11:39:54.131508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ddb580) on tqpair=0x1d79690 00:21:13.849 [2024-11-15 11:39:54.131522] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:21:13.849 0% 00:21:13.849 Data Units Read: 0 00:21:13.849 Data Units Written: 0 00:21:13.849 Host Read Commands: 0 00:21:13.849 Host Write Commands: 0 00:21:13.849 Controller Busy Time: 0 minutes 00:21:13.849 Power Cycles: 0 00:21:13.849 Power On Hours: 0 hours 00:21:13.849 Unsafe Shutdowns: 0 00:21:13.849 Unrecoverable Media Errors: 0 00:21:13.849 Lifetime Error Log Entries: 0 00:21:13.849 Warning Temperature Time: 0 minutes 00:21:13.849 Critical Temperature Time: 0 minutes 00:21:13.849 00:21:13.849 Number of Queues 00:21:13.849 ================ 00:21:13.849 Number of I/O Submission Queues: 127 00:21:13.849 Number of I/O Completion Queues: 127 00:21:13.849 00:21:13.849 Active Namespaces 00:21:13.849 ================= 00:21:13.849 Namespace ID:1 00:21:13.849 Error Recovery Timeout: Unlimited 00:21:13.849 Command Set Identifier: NVM (00h) 00:21:13.849 Deallocate: Supported 00:21:13.849 Deallocated/Unwritten Error: Not Supported 00:21:13.849 Deallocated Read Value: Unknown 00:21:13.849 Deallocate in Write Zeroes: Not Supported 00:21:13.849 Deallocated Guard Field: 0xFFFF 00:21:13.849 Flush: Supported 00:21:13.849 Reservation: Supported 00:21:13.849 Namespace Sharing Capabilities: Multiple Controllers 00:21:13.849 Size (in LBAs): 131072 (0GiB) 00:21:13.849 Capacity (in LBAs): 131072 (0GiB) 00:21:13.849 Utilization (in LBAs): 131072 (0GiB) 00:21:13.849 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:13.849 EUI64: ABCDEF0123456789 00:21:13.849 UUID: 5755ae52-a9bb-4f59-9052-5d331b0fa074 00:21:13.849 Thin Provisioning: Not Supported 00:21:13.849 Per-NS Atomic Units: Yes 00:21:13.849 Atomic Boundary Size (Normal): 0 00:21:13.849 Atomic Boundary Size (PFail): 0 00:21:13.849 Atomic Boundary Offset: 0 00:21:13.849 Maximum Single Source Range Length: 65535 00:21:13.849 Maximum Copy Length: 65535 00:21:13.849 Maximum Source Range Count: 1 00:21:13.849 NGUID/EUI64 Never Reused: No 00:21:13.849 Namespace Write Protected: No 00:21:13.849 Number of LBA Formats: 1 00:21:13.849 Current LBA Format: LBA Format #00 00:21:13.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:13.849 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # 
nvmftestfini 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.849 rmmod nvme_tcp 00:21:13.849 rmmod nvme_fabrics 00:21:13.849 rmmod nvme_keyring 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2987813 ']' 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2987813 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2987813 ']' 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2987813 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.849 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2987813 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2987813' 00:21:14.108 killing process with pid 2987813 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2987813 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2987813 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.108 11:39:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.108 11:39:54 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.636 00:21:16.636 real 0m5.627s 00:21:16.636 user 0m4.722s 00:21:16.636 sys 0m1.997s 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.636 ************************************ 00:21:16.636 END TEST nvmf_identify 00:21:16.636 ************************************ 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.636 ************************************ 00:21:16.636 START TEST nvmf_perf 00:21:16.636 ************************************ 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:16.636 * Looking for test storage... 00:21:16.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:16.636 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:16.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.637 --rc genhtml_branch_coverage=1 00:21:16.637 --rc genhtml_function_coverage=1 00:21:16.637 --rc genhtml_legend=1 00:21:16.637 --rc geninfo_all_blocks=1 00:21:16.637 --rc geninfo_unexecuted_blocks=1 00:21:16.637 00:21:16.637 ' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:16.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.637 --rc genhtml_branch_coverage=1 00:21:16.637 --rc genhtml_function_coverage=1 00:21:16.637 --rc genhtml_legend=1 00:21:16.637 --rc geninfo_all_blocks=1 00:21:16.637 --rc geninfo_unexecuted_blocks=1 00:21:16.637 00:21:16.637 ' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:16.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.637 --rc genhtml_branch_coverage=1 00:21:16.637 --rc genhtml_function_coverage=1 00:21:16.637 --rc genhtml_legend=1 00:21:16.637 --rc geninfo_all_blocks=1 00:21:16.637 --rc geninfo_unexecuted_blocks=1 00:21:16.637 00:21:16.637 ' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:16.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.637 --rc genhtml_branch_coverage=1 00:21:16.637 --rc genhtml_function_coverage=1 00:21:16.637 --rc genhtml_legend=1 00:21:16.637 --rc geninfo_all_blocks=1 00:21:16.637 --rc geninfo_unexecuted_blocks=1 00:21:16.637 00:21:16.637 ' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.637 11:39:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.637 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.638 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.638 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.638 11:39:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:18.540 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:18.540 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.540 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:18.541 Found net devices under 0000:09:00.0: cvl_0_0 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.541 11:39:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:18.541 Found net devices under 0000:09:00.1: cvl_0_1 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.541 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.800 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.800 11:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.800 11:39:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:21:18.800 00:21:18.800 --- 10.0.0.2 ping statistics --- 00:21:18.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.800 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:18.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:18.800 00:21:18.800 --- 10.0.0.1 ping statistics --- 00:21:18.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.800 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2989909 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2989909 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2989909 ']' 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:18.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.800 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:18.800 [2024-11-15 11:39:59.171730] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:21:18.800 [2024-11-15 11:39:59.171839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.058 [2024-11-15 11:39:59.244745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.058 [2024-11-15 11:39:59.300396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.058 [2024-11-15 11:39:59.300451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.058 [2024-11-15 11:39:59.300473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.058 [2024-11-15 11:39:59.300483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.058 [2024-11-15 11:39:59.300492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.058 [2024-11-15 11:39:59.302110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.058 [2024-11-15 11:39:59.302218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.058 [2024-11-15 11:39:59.302328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.058 [2024-11-15 11:39:59.302333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:19.058 11:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:22.336 11:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:22.336 11:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:22.593 11:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:21:22.593 11:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:22.851 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:21:22.851 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:21:22.851 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:22.851 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:22.851 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.108 [2024-11-15 11:40:03.435697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.108 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.366 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:23.366 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.624 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:23.624 11:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:23.882 11:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.138 [2024-11-15 11:40:04.515595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.138 11:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.395 11:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:21:24.395 11:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:21:24.396 11:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:24.396 11:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:21:25.768 Initializing NVMe Controllers 00:21:25.768 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:21:25.768 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:21:25.768 Initialization complete. Launching workers. 
00:21:25.768 ======================================================== 00:21:25.768 Latency(us) 00:21:25.768 Device Information : IOPS MiB/s Average min max 00:21:25.768 PCIE (0000:0b:00.0) NSID 1 from core 0: 85385.96 333.54 374.32 31.83 5060.46 00:21:25.768 ======================================================== 00:21:25.768 Total : 85385.96 333.54 374.32 31.83 5060.46 00:21:25.768 00:21:25.768 11:40:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:27.141 Initializing NVMe Controllers 00:21:27.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:27.142 Initialization complete. Launching workers. 00:21:27.142 ======================================================== 00:21:27.142 Latency(us) 00:21:27.142 Device Information : IOPS MiB/s Average min max 00:21:27.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 64.90 0.25 15807.95 156.71 44827.99 00:21:27.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.92 0.20 20106.58 6982.55 55848.38 00:21:27.142 ======================================================== 00:21:27.142 Total : 115.82 0.45 17697.86 156.71 55848.38 00:21:27.142 00:21:27.142 11:40:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:28.515 Initializing NVMe Controllers 00:21:28.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:28.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:28.515 Initialization complete. Launching workers. 00:21:28.515 ======================================================== 00:21:28.515 Latency(us) 00:21:28.515 Device Information : IOPS MiB/s Average min max 00:21:28.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8513.99 33.26 3759.25 659.34 7866.13 00:21:28.515 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3879.00 15.15 8293.69 6824.68 15975.73 00:21:28.515 ======================================================== 00:21:28.515 Total : 12392.99 48.41 5178.52 659.34 15975.73 00:21:28.515 00:21:28.515 11:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:28.515 11:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:28.515 11:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:31.041 Initializing NVMe Controllers 00:21:31.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.041 Controller IO queue size 128, less than required. 00:21:31.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:31.041 Controller IO queue size 128, less than required. 00:21:31.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:31.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:31.041 Initialization complete. Launching workers. 00:21:31.041 ======================================================== 00:21:31.041 Latency(us) 00:21:31.041 Device Information : IOPS MiB/s Average min max 00:21:31.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1673.96 418.49 78045.37 54475.04 124776.07 00:21:31.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 564.32 141.08 234395.03 66960.36 403265.81 00:21:31.041 ======================================================== 00:21:31.041 Total : 2238.27 559.57 117464.48 54475.04 403265.81 00:21:31.041 00:21:31.041 11:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:31.298 No valid NVMe controllers or AIO or URING devices found 00:21:31.298 Initializing NVMe Controllers 00:21:31.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.298 Controller IO queue size 128, less than required. 00:21:31.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:31.298 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:31.298 Controller IO queue size 128, less than required. 00:21:31.298 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:31.298 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:31.298 WARNING: Some requested NVMe devices were skipped 00:21:31.556 11:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:34.086 Initializing NVMe Controllers 00:21:34.086 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.086 Controller IO queue size 128, less than required. 00:21:34.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:34.086 Controller IO queue size 128, less than required. 00:21:34.086 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:34.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.086 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:34.086 Initialization complete. Launching workers. 
00:21:34.086 00:21:34.086 ==================== 00:21:34.086 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:34.086 TCP transport: 00:21:34.086 polls: 8927 00:21:34.086 idle_polls: 5540 00:21:34.086 sock_completions: 3387 00:21:34.086 nvme_completions: 6073 00:21:34.086 submitted_requests: 9002 00:21:34.086 queued_requests: 1 00:21:34.086 00:21:34.086 ==================== 00:21:34.086 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:34.086 TCP transport: 00:21:34.086 polls: 11829 00:21:34.086 idle_polls: 8390 00:21:34.086 sock_completions: 3439 00:21:34.086 nvme_completions: 6503 00:21:34.086 submitted_requests: 9726 00:21:34.086 queued_requests: 1 00:21:34.086 ======================================================== 00:21:34.086 Latency(us) 00:21:34.086 Device Information : IOPS MiB/s Average min max 00:21:34.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1515.65 378.91 86810.71 62222.09 155714.66 00:21:34.086 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1622.99 405.75 79540.19 42645.37 123781.15 00:21:34.086 ======================================================== 00:21:34.086 Total : 3138.64 784.66 83051.13 42645.37 155714.66 00:21:34.086 00:21:34.086 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:34.086 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.344 rmmod nvme_tcp 00:21:34.344 rmmod nvme_fabrics 00:21:34.344 rmmod nvme_keyring 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2989909 ']' 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2989909 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2989909 ']' 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2989909 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989909 00:21:34.344 11:40:14 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989909' 00:21:34.344 killing process with pid 2989909 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2989909 00:21:34.344 11:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2989909 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.243 11:40:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:38.147 00:21:38.147 real 0m21.680s 00:21:38.147 user 1m6.719s 00:21:38.147 sys 0m5.605s 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:38.147 ************************************ 00:21:38.147 END TEST nvmf_perf 00:21:38.147 ************************************ 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.147 ************************************ 00:21:38.147 START TEST nvmf_fio_host 00:21:38.147 ************************************ 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:38.147 * Looking for test storage... 
00:21:38.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.147 --rc genhtml_branch_coverage=1 00:21:38.147 --rc genhtml_function_coverage=1 00:21:38.147 --rc genhtml_legend=1 00:21:38.147 --rc geninfo_all_blocks=1 00:21:38.147 --rc geninfo_unexecuted_blocks=1 00:21:38.147 00:21:38.147 ' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.147 --rc genhtml_branch_coverage=1 00:21:38.147 --rc genhtml_function_coverage=1 00:21:38.147 --rc genhtml_legend=1 00:21:38.147 --rc geninfo_all_blocks=1 00:21:38.147 --rc geninfo_unexecuted_blocks=1 00:21:38.147 00:21:38.147 ' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.147 --rc genhtml_branch_coverage=1 00:21:38.147 --rc genhtml_function_coverage=1 00:21:38.147 --rc genhtml_legend=1 00:21:38.147 --rc geninfo_all_blocks=1 00:21:38.147 --rc geninfo_unexecuted_blocks=1 00:21:38.147 00:21:38.147 ' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.147 --rc genhtml_branch_coverage=1 00:21:38.147 --rc genhtml_function_coverage=1 00:21:38.147 --rc genhtml_legend=1 00:21:38.147 --rc geninfo_all_blocks=1 00:21:38.147 --rc geninfo_unexecuted_blocks=1 00:21:38.147 00:21:38.147 ' 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.147 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.147 11:40:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:38.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:38.148 
11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.148 11:40:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:40.686 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:40.687 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:40.687 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:40.687 Found net devices under 0000:09:00.0: cvl_0_0 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:40.687 Found net devices under 0000:09:00.1: cvl_0_1 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:21:40.687 00:21:40.687 --- 10.0.0.2 ping statistics --- 00:21:40.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.687 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:21:40.687 00:21:40.687 --- 10.0.0.1 ping statistics --- 00:21:40.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.687 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2993881 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2993881 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2993881 ']' 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.687 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.688 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.688 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.688 11:40:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.688 [2024-11-15 11:40:20.810943] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
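[editor's note] A condensed sketch of the nvmf_tcp_init sequence traced just above; interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and port 4420 are the values used in this run, and error handling/cleanup are omitted. Keeping the target's 10.0.0.2 inside its own network namespace gives the SPDK target and the initiator separate network stacks on the same host instead of the kernel loopback path.

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"                          # target gets its own network stack
    ip link set cvl_0_0 netns "$TARGET_NS"             # move the first ice port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, default namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the firewall for NVMe/TCP on the initiator-side link
    ping -c 1 10.0.0.2                                 # initiator -> target reachability
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target -> initiator reachability

Once both pings succeed, nvmf_tgt is started under "ip netns exec $TARGET_NS" (as shown in the record above) so that everything it listens on lives behind the namespaced port.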
00:21:40.688 [2024-11-15 11:40:20.811038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.688 [2024-11-15 11:40:20.883130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.688 [2024-11-15 11:40:20.941001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.688 [2024-11-15 11:40:20.941050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.688 [2024-11-15 11:40:20.941071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.688 [2024-11-15 11:40:20.941082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.688 [2024-11-15 11:40:20.941093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.688 [2024-11-15 11:40:20.942721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.688 [2024-11-15 11:40:20.942787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.688 [2024-11-15 11:40:20.942855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.688 [2024-11-15 11:40:20.942858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.688 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.688 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:40.688 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:41.253 [2024-11-15 11:40:21.370790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.253 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:41.253 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.253 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.253 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:41.510 Malloc1 00:21:41.510 11:40:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.768 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.026 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.283 [2024-11-15 11:40:22.585966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.283 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.540 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:42.541 11:40:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.798 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:42.798 fio-3.35 00:21:42.798 Starting 1 thread 00:21:45.358 00:21:45.358 test: (groupid=0, jobs=1): 
err= 0: pid=2994238: Fri Nov 15 11:40:25 2024 00:21:45.358 read: IOPS=8908, BW=34.8MiB/s (36.5MB/s)(69.8MiB/2006msec) 00:21:45.358 slat (nsec): min=1914, max=162053, avg=2532.43, stdev=1894.28 00:21:45.358 clat (usec): min=2500, max=13882, avg=7879.54, stdev=610.02 00:21:45.358 lat (usec): min=2529, max=13885, avg=7882.07, stdev=609.92 00:21:45.358 clat percentiles (usec): 00:21:45.358 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7439], 00:21:45.358 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:45.358 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8717], 00:21:45.358 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[12256], 99.95th=[13173], 00:21:45.358 | 99.99th=[13829] 00:21:45.358 bw ( KiB/s): min=34672, max=36144, per=99.92%, avg=35604.00, stdev=661.01, samples=4 00:21:45.358 iops : min= 8668, max= 9036, avg=8901.00, stdev=165.25, samples=4 00:21:45.358 write: IOPS=8923, BW=34.9MiB/s (36.5MB/s)(69.9MiB/2006msec); 0 zone resets 00:21:45.358 slat (usec): min=2, max=142, avg= 2.67, stdev= 1.46 00:21:45.358 clat (usec): min=1448, max=12256, avg=6414.10, stdev=524.06 00:21:45.358 lat (usec): min=1458, max=12258, avg=6416.77, stdev=524.03 00:21:45.358 clat percentiles (usec): 00:21:45.358 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:21:45.358 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:21:45.358 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7177], 00:21:45.358 | 99.00th=[ 7570], 99.50th=[ 7701], 99.90th=[11207], 99.95th=[11469], 00:21:45.358 | 99.99th=[12256] 00:21:45.358 bw ( KiB/s): min=35464, max=35968, per=99.97%, avg=35682.00, stdev=215.89, samples=4 00:21:45.358 iops : min= 8866, max= 8992, avg=8920.50, stdev=53.97, samples=4 00:21:45.358 lat (msec) : 2=0.02%, 4=0.12%, 10=99.71%, 20=0.15% 00:21:45.358 cpu : usr=65.04%, sys=33.27%, ctx=74, majf=0, minf=32 00:21:45.358 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:45.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:45.358 issued rwts: total=17870,17900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.358 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:45.358 00:21:45.358 Run status group 0 (all jobs): 00:21:45.358 READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), io=69.8MiB (73.2MB), run=2006-2006msec 00:21:45.358 WRITE: bw=34.9MiB/s (36.5MB/s), 34.9MiB/s-34.9MiB/s (36.5MB/s-36.5MB/s), io=69.9MiB (73.3MB), run=2006-2006msec 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:45.358 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:45.359 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:45.359 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:45.359 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:45.359 11:40:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:45.359 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:45.359 fio-3.35 00:21:45.359 Starting 1 thread 00:21:47.959 00:21:47.959 test: (groupid=0, jobs=1): err= 0: pid=2994579: Fri Nov 15 11:40:28 2024 00:21:47.959 read: IOPS=8527, BW=133MiB/s (140MB/s)(267MiB/2007msec) 00:21:47.959 slat (nsec): min=2795, max=90304, avg=3435.30, stdev=1453.45 00:21:47.959 clat (usec): min=2215, max=16356, avg=8606.78, stdev=1847.65 00:21:47.959 lat (usec): min=2219, max=16359, avg=8610.21, stdev=1847.67 00:21:47.959 clat percentiles (usec): 00:21:47.959 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 7046], 00:21:47.959 | 30.00th=[ 7635], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9110], 00:21:47.959 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10945], 95.00th=[11600], 00:21:47.959 | 99.00th=[13435], 99.50th=[13829], 99.90th=[15926], 99.95th=[16057], 00:21:47.959 | 99.99th=[16319] 00:21:47.959 bw ( KiB/s): min=61696, max=78786, per=51.62%, avg=70432.50, stdev=9009.44, samples=4 00:21:47.959 iops : min= 3856, max= 4924, avg=4402.00, stdev=563.05, samples=4 00:21:47.959 write: IOPS=5138, BW=80.3MiB/s (84.2MB/s)(144MiB/1795msec); 0 zone resets 
00:21:47.959 slat (usec): min=30, max=160, avg=32.75, stdev= 4.45 00:21:47.959 clat (usec): min=4039, max=18278, avg=11328.10, stdev=1876.22 00:21:47.959 lat (usec): min=4070, max=18312, avg=11360.85, stdev=1876.20 00:21:47.959 clat percentiles (usec): 00:21:47.959 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9634], 00:21:47.959 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:21:47.959 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[14615], 00:21:47.959 | 99.00th=[15926], 99.50th=[16450], 99.90th=[16909], 99.95th=[16909], 00:21:47.959 | 99.99th=[18220] 00:21:47.959 bw ( KiB/s): min=64960, max=81756, per=89.39%, avg=73495.00, stdev=9067.51, samples=4 00:21:47.959 iops : min= 4060, max= 5109, avg=4593.25, stdev=566.49, samples=4 00:21:47.959 lat (msec) : 4=0.20%, 10=60.39%, 20=39.41% 00:21:47.959 cpu : usr=76.83%, sys=21.97%, ctx=35, majf=0, minf=64 00:21:47.959 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:47.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:47.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:47.960 issued rwts: total=17115,9224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:47.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:47.960 00:21:47.960 Run status group 0 (all jobs): 00:21:47.960 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2007-2007msec 00:21:47.960 WRITE: bw=80.3MiB/s (84.2MB/s), 80.3MiB/s-80.3MiB/s (84.2MB/s-84.2MB/s), io=144MiB (151MB), run=1795-1795msec 00:21:47.960 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.217 rmmod nvme_tcp 00:21:48.217 rmmod nvme_fabrics 00:21:48.217 rmmod nvme_keyring 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2993881 ']' 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2993881 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2993881 ']' 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 2993881 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993881 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993881' 00:21:48.217 killing process with pid 2993881 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2993881 00:21:48.217 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2993881 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.475 11:40:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.381 11:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:50.381 00:21:50.381 real 0m12.478s 00:21:50.381 user 0m37.109s 00:21:50.381 sys 0m4.013s 00:21:50.381 11:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.381 11:40:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.381 ************************************ 00:21:50.381 END TEST nvmf_fio_host 00:21:50.381 ************************************ 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.641 ************************************ 00:21:50.641 START TEST nvmf_failover 00:21:50.641 ************************************ 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:50.641 * Looking for test storage... 00:21:50.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.641 --rc genhtml_branch_coverage=1 00:21:50.641 --rc genhtml_function_coverage=1 00:21:50.641 --rc genhtml_legend=1 00:21:50.641 --rc geninfo_all_blocks=1 00:21:50.641 --rc geninfo_unexecuted_blocks=1 00:21:50.641 00:21:50.641 ' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.641 --rc genhtml_branch_coverage=1 00:21:50.641 --rc genhtml_function_coverage=1 00:21:50.641 --rc genhtml_legend=1 00:21:50.641 --rc geninfo_all_blocks=1 00:21:50.641 --rc geninfo_unexecuted_blocks=1 00:21:50.641 00:21:50.641 ' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.641 --rc genhtml_branch_coverage=1 00:21:50.641 --rc genhtml_function_coverage=1 00:21:50.641 --rc genhtml_legend=1 00:21:50.641 --rc geninfo_all_blocks=1 00:21:50.641 --rc geninfo_unexecuted_blocks=1 00:21:50.641 00:21:50.641 ' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:50.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:50.641 --rc genhtml_branch_coverage=1 00:21:50.641 --rc genhtml_function_coverage=1 00:21:50.641 --rc genhtml_legend=1 00:21:50.641 --rc geninfo_all_blocks=1 00:21:50.641 --rc geninfo_unexecuted_blocks=1 00:21:50.641 00:21:50.641 ' 00:21:50.641 11:40:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.641 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:50.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
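[editor's note] The lt/cmp_versions walk a few records above (deciding whether the installed lcov is older than 2.x) is a field-by-field numeric compare. The following is a simplified restatement, not the verbatim scripts/common.sh source; it assumes purely numeric version fields, and the use_legacy_lcov_options consumer at the end is hypothetical (in the real script the result only selects the spelling of the --rc coverage options).

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"               # split "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields compare as 0
            ((a > b)) && { [[ $op == *'>'* ]]; return; }   # first differing field decides
            ((a < b)) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]                              # all fields equal
    }

    # hypothetical consumer, mirroring the "lcov --version | awk '{print $NF}'" probe above
    lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_options=1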
00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:50.642 11:40:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:53.175 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:53.175 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:53.175 Found net devices under 0000:09:00.0: cvl_0_0 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:53.175 Found net devices under 0000:09:00.1: cvl_0_1 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.175 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:21:53.176 00:21:53.176 --- 10.0.0.2 ping statistics --- 00:21:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.176 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:53.176 00:21:53.176 --- 10.0.0.1 ping statistics --- 00:21:53.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.176 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2996899 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2996899 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2996899 ']' 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.176 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.176 [2024-11-15 11:40:33.392994] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:21:53.176 [2024-11-15 11:40:33.393065] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.176 [2024-11-15 11:40:33.464984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.176 [2024-11-15 11:40:33.523103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:53.176 [2024-11-15 11:40:33.523151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.176 [2024-11-15 11:40:33.523175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.176 [2024-11-15 11:40:33.523185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.176 [2024-11-15 11:40:33.523195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.176 [2024-11-15 11:40:33.524542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.176 [2024-11-15 11:40:33.524606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.176 [2024-11-15 11:40:33.524609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.434 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:53.692 [2024-11-15 11:40:33.970347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.692 11:40:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:53.950 Malloc0 00:21:53.950 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:54.208 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:54.464 11:40:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.721 [2024-11-15 11:40:35.087766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.721 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.979 [2024-11-15 11:40:35.364519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.979 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:55.236 [2024-11-15 11:40:35.633385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2997193 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2997193 /var/tmp/bdevperf.sock 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2997193 ']' 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.236 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:55.801 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.801 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:55.801 11:40:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:56.058 NVMe0n1 00:21:56.058 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:56.624 00:21:56.624 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2997328 00:21:56.624 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.624 11:40:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:57.558 11:40:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.815 [2024-11-15 11:40:38.062226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 [2024-11-15 11:40:38.062297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 [2024-11-15 11:40:38.062336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 [2024-11-15 
11:40:38.062360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 [2024-11-15 11:40:38.062372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 [2024-11-15 11:40:38.062384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 [2024-11-15 11:40:38.062395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93380 is same with the state(6) to be set 00:21:57.815 11:40:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:01.097 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:01.097 00:22:01.097 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.355 [2024-11-15 11:40:41.753708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93e30 is same with the state(6) to be set 00:22:01.355 [2024-11-15 11:40:41.753765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf93e30 is same with the state(6) to be set 00:22:01.355 11:40:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:04.637 11:40:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.895 [2024-11-15 11:40:45.081738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.895 11:40:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:05.829 11:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:06.087 11:40:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2997328 00:22:12.651 { 00:22:12.651 "results": [ 00:22:12.651 { 00:22:12.651 "job": "NVMe0n1", 00:22:12.651 "core_mask": "0x1", 00:22:12.651 "workload": "verify", 00:22:12.651 "status": "finished", 00:22:12.651 "verify_range": { 00:22:12.651 "start": 0, 00:22:12.651 "length": 16384 00:22:12.651 }, 00:22:12.651 "queue_depth": 128, 00:22:12.651 "io_size": 4096, 00:22:12.651 "runtime": 15.045373, 00:22:12.651 "iops": 8482.342046288915, 00:22:12.651 "mibps": 33.134148618316075, 00:22:12.651 "io_failed": 9133, 00:22:12.651 "io_timeout": 0, 00:22:12.651 "avg_latency_us": 14017.268613805209, 00:22:12.651 "min_latency_us": 552.2014814814814, 00:22:12.651 "max_latency_us": 41748.85925925926 00:22:12.651 } 00:22:12.651 ], 00:22:12.651 "core_count": 1 00:22:12.651 } 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2997193 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2997193 ']' 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2997193 00:22:12.651 
11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2997193 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2997193' 00:22:12.651 killing process with pid 2997193 00:22:12.651 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2997193 00:22:12.652 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2997193 00:22:12.652 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:12.652 [2024-11-15 11:40:35.699930] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:22:12.652 [2024-11-15 11:40:35.700029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997193 ] 00:22:12.652 [2024-11-15 11:40:35.767858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.652 [2024-11-15 11:40:35.826842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.652 Running I/O for 15 seconds... 
00:22:12.652 8560.00 IOPS, 33.44 MiB/s [2024-11-15T10:40:53.079Z] [2024-11-15 11:40:38.063125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:12.652 [2024-11-15 11:40:38.063495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063801] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.063982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.063996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064077] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.652 [2024-11-15 11:40:38.064223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.652 [2024-11-15 11:40:38.064241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79400 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 
[2024-11-15 11:40:38.064693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.064909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.064976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.064992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.653 [2024-11-15 11:40:38.065364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.653 [2024-11-15 11:40:38.065414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.653 [2024-11-15 11:40:38.065429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 
11:40:38.065887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.065983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.065998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.654 [2024-11-15 11:40:38.066299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.654 [2024-11-15 11:40:38.066372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:22:12.654 [2024-11-15 11:40:38.066385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.654 [2024-11-15 11:40:38.066416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.654 [2024-11-15 11:40:38.066427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:22:12.654 [2024-11-15 11:40:38.066440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.654 [2024-11-15 11:40:38.066465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.654 [2024-11-15 11:40:38.066476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:22:12.654 [2024-11-15 11:40:38.066489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.654 [2024-11-15 11:40:38.066513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:12.654 [2024-11-15 11:40:38.066524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:22:12.654 [2024-11-15 11:40:38.066540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.654 [2024-11-15 11:40:38.066566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.654 [2024-11-15 11:40:38.066578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:22:12.654 [2024-11-15 11:40:38.066590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.654 [2024-11-15 11:40:38.066603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.654 [2024-11-15 11:40:38.066637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.654 [2024-11-15 11:40:38.066648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.066699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.066746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.066793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 
11:40:38.066841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.066887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.066939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.066953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.066967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.066978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.066989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.655 [2024-11-15 11:40:38.067443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.655 [2024-11-15 11:40:38.067454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:22:12.655 [2024-11-15 11:40:38.067468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067537] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:12.655 [2024-11-15 11:40:38.067588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.655 [2024-11-15 11:40:38.067608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.655 [2024-11-15 11:40:38.067636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.655 [2024-11-15 11:40:38.067662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.655 [2024-11-15 11:40:38.067689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:38.067702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:12.655 [2024-11-15 11:40:38.070966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:12.655 [2024-11-15 11:40:38.071002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65560 (9): Bad file descriptor 00:22:12.655 [2024-11-15 11:40:38.108716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:12.655 8455.00 IOPS, 33.03 MiB/s [2024-11-15T10:40:53.082Z] 8533.33 IOPS, 33.33 MiB/s [2024-11-15T10:40:53.082Z] 8556.50 IOPS, 33.42 MiB/s [2024-11-15T10:40:53.082Z] [2024-11-15 11:40:41.755401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.655 [2024-11-15 11:40:41.755445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:41.755471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.655 [2024-11-15 11:40:41.755499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:41.755517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.655 [2024-11-15 11:40:41.755532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:41.755548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.655 [2024-11-15 11:40:41.755563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.655 [2024-11-15 11:40:41.755579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.655 [2024-11-15 11:40:41.755594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.755982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.755996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.656 [2024-11-15 11:40:41.756024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.656 [2024-11-15 11:40:41.756574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.656 [2024-11-15 11:40:41.756590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 
11:40:41.756693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.756984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.756999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.657 [2024-11-15 11:40:41.757180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83200 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.657 [2024-11-15 11:40:41.757718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.657 [2024-11-15 11:40:41.757731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 
11:40:41.757880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.757978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.757992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.758021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.758048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.758081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.758109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.658 [2024-11-15 11:40:41.758138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83368 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83376 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83392 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83400 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83408 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 
[2024-11-15 11:40:41.758545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83416 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83424 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83432 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83440 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.658 [2024-11-15 11:40:41.758833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83448 len:8 PRP1 0x0 PRP2 0x0 00:22:12.658 [2024-11-15 11:40:41.758846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.658 [2024-11-15 11:40:41.758858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.658 [2024-11-15 11:40:41.758870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.758881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.758893] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.758922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.758934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.758944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.758958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.758972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.758984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.758996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83472 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83480 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83488 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83496 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83504 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83512 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83520 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83528 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83536 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83544 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83552 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:12.659 [2024-11-15 11:40:41.759560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83560 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83568 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83576 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83584 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83592 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83600 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759905] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83608 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.759956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.759972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.759985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83616 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.759998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.760012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.760023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.760035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83624 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.760047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.659 [2024-11-15 11:40:41.760061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.659 [2024-11-15 11:40:41.760072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.659 [2024-11-15 11:40:41.760083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83632 len:8 PRP1 0x0 PRP2 0x0 00:22:12.659 [2024-11-15 11:40:41.760097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.660 [2024-11-15 11:40:41.760133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.660 [2024-11-15 11:40:41.760144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83640 len:8 PRP1 0x0 PRP2 0x0 00:22:12.660 [2024-11-15 11:40:41.760157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.660 [2024-11-15 11:40:41.760187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.660 [2024-11-15 11:40:41.760200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83648 len:8 PRP1 0x0 PRP2 0x0 00:22:12.660 [2024-11-15 11:40:41.760212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:12.660 [2024-11-15 11:40:41.760238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.660 [2024-11-15 11:40:41.760249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83656 len:8 PRP1 0x0 PRP2 0x0 00:22:12.660 [2024-11-15 11:40:41.760262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760342] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:12.660 [2024-11-15 11:40:41.760384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.660 [2024-11-15 11:40:41.760404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.660 [2024-11-15 11:40:41.760443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.660 [2024-11-15 11:40:41.760471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.660 [2024-11-15 11:40:41.760500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:41.760518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:12.660 [2024-11-15 11:40:41.760573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65560 (9): Bad file descriptor 00:22:12.660 [2024-11-15 11:40:41.763912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:12.660 [2024-11-15 11:40:41.874472] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:22:12.660 8362.00 IOPS, 32.66 MiB/s [2024-11-15T10:40:53.087Z] 8388.17 IOPS, 32.77 MiB/s [2024-11-15T10:40:53.087Z] 8428.86 IOPS, 32.93 MiB/s [2024-11-15T10:40:53.087Z] 8455.00 IOPS, 33.03 MiB/s [2024-11-15T10:40:53.087Z] 8479.00 IOPS, 33.12 MiB/s [2024-11-15T10:40:53.087Z] [2024-11-15 11:40:46.406711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.406797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.406845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.406874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.406920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.406948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.406977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.406991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38832 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:12.660 [2024-11-15 11:40:46.407398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.660 [2024-11-15 11:40:46.407516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.660 [2024-11-15 11:40:46.407530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407697] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.407976] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.407990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.661 [2024-11-15 11:40:46.408523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.661 [2024-11-15 11:40:46.408543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.662 [2024-11-15 11:40:46.408557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.662 [2024-11-15 11:40:46.408586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 
[2024-11-15 11:40:46.408601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.662 [2024-11-15 11:40:46.408615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.662 [2024-11-15 11:40:46.408660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.662 [2024-11-15 11:40:46.408688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.662 [2024-11-15 11:40:46.408915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.408972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.408986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38528 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.662 [2024-11-15 11:40:46.409677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.662 [2024-11-15 11:40:46.409691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.663 [2024-11-15 11:40:46.409850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.409978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.409992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 
11:40:46.410138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.663 [2024-11-15 11:40:46.410432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.663 [2024-11-15 11:40:46.410663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.663 [2024-11-15 11:40:46.410706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.663 [2024-11-15 11:40:46.410718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38752 len:8 PRP1 0x0 PRP2 0x0 00:22:12.663 [2024-11-15 11:40:46.410731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410794] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:12.663 [2024-11-15 11:40:46.410831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.663 [2024-11-15 11:40:46.410865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.663 [2024-11-15 11:40:46.410894] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.663 [2024-11-15 11:40:46.410921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.663 [2024-11-15 11:40:46.410948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.663 [2024-11-15 11:40:46.410962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:12.663 [2024-11-15 11:40:46.411019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e65560 (9): Bad file descriptor 00:22:12.663 [2024-11-15 11:40:46.414291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:12.663 [2024-11-15 11:40:46.481465] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:12.664 8436.60 IOPS, 32.96 MiB/s [2024-11-15T10:40:53.091Z] 8452.73 IOPS, 33.02 MiB/s [2024-11-15T10:40:53.091Z] 8469.75 IOPS, 33.08 MiB/s [2024-11-15T10:40:53.091Z] 8485.15 IOPS, 33.15 MiB/s [2024-11-15T10:40:53.091Z] 8501.57 IOPS, 33.21 MiB/s [2024-11-15T10:40:53.091Z] 8507.93 IOPS, 33.23 MiB/s 00:22:12.664 Latency(us) 00:22:12.664 [2024-11-15T10:40:53.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.664 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:12.664 Verification LBA range: start 0x0 length 0x4000 00:22:12.664 NVMe0n1 : 15.05 8482.34 33.13 607.03 0.00 14017.27 552.20 41748.86 00:22:12.664 [2024-11-15T10:40:53.091Z] =================================================================================================================== 00:22:12.664 [2024-11-15T10:40:53.091Z] Total : 8482.34 33.13 607.03 0.00 14017.27 552.20 41748.86 00:22:12.664 Received shutdown signal, test time was about 15.000000 seconds 00:22:12.664 00:22:12.664 Latency(us) 00:22:12.664 [2024-11-15T10:40:53.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.664 [2024-11-15T10:40:53.091Z] =================================================================================================================== 00:22:12.664 [2024-11-15T10:40:53.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2999179 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 
2999179 /var/tmp/bdevperf.sock 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2999179 ']' 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.664 [2024-11-15 11:40:52.825339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:12.664 11:40:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:12.922 [2024-11-15 11:40:53.086055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:12.922 11:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:13.179 NVMe0n1 00:22:13.179 11:40:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:13.744 00:22:13.744 11:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:14.309 00:22:14.309 11:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.310 11:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:14.567 11:40:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.824 11:40:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:18.104 11:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.104 11:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:18.104 11:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2999857 00:22:18.104 11:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:18.105 11:40:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2999857 00:22:19.477 { 00:22:19.477 "results": [ 00:22:19.477 { 00:22:19.477 "job": "NVMe0n1", 00:22:19.477 "core_mask": "0x1", 00:22:19.477 "workload": "verify", 00:22:19.477 "status": "finished", 00:22:19.477 "verify_range": { 00:22:19.477 "start": 0, 00:22:19.477 "length": 16384 00:22:19.477 }, 00:22:19.477 "queue_depth": 128, 00:22:19.477 "io_size": 4096, 00:22:19.477 "runtime": 1.04573, 00:22:19.477 "iops": 8287.033938014592, 00:22:19.477 "mibps": 32.3712263203695, 00:22:19.477 "io_failed": 0, 00:22:19.477 "io_timeout": 0, 00:22:19.477 "avg_latency_us": 14801.349798189603, 00:22:19.477 "min_latency_us": 2997.6651851851852, 00:22:19.477 "max_latency_us": 43496.485925925925 00:22:19.477 } 00:22:19.477 ], 00:22:19.477 "core_count": 1 00:22:19.477 } 00:22:19.477 11:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:19.477 [2024-11-15 11:40:52.328185] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:22:19.477 [2024-11-15 11:40:52.328298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2999179 ] 00:22:19.477 [2024-11-15 11:40:52.408210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.477 [2024-11-15 11:40:52.466902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.477 [2024-11-15 11:40:55.039948] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:19.477 [2024-11-15 11:40:55.040040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.477 [2024-11-15 11:40:55.040063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.477 [2024-11-15 11:40:55.040079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.478 [2024-11-15 11:40:55.040093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.478 [2024-11-15 11:40:55.040107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.478 [2024-11-15 11:40:55.040122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.478 [2024-11-15 11:40:55.040135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.478 [2024-11-15 11:40:55.040149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.478 [2024-11-15 11:40:55.040163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:19.478 [2024-11-15 11:40:55.040206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:19.478 [2024-11-15 11:40:55.040238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb4560 (9): Bad file descriptor 00:22:19.478 [2024-11-15 11:40:55.172410] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:19.478 Running I/O for 1 seconds... 00:22:19.478 8538.00 IOPS, 33.35 MiB/s 00:22:19.478 Latency(us) 00:22:19.478 [2024-11-15T10:40:59.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.478 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.478 Verification LBA range: start 0x0 length 0x4000 00:22:19.478 NVMe0n1 : 1.05 8287.03 32.37 0.00 0.00 14801.35 2997.67 43496.49 00:22:19.478 [2024-11-15T10:40:59.905Z] =================================================================================================================== 00:22:19.478 [2024-11-15T10:40:59.905Z] Total : 8287.03 32.37 0.00 0.00 14801.35 2997.67 43496.49 00:22:19.478 11:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.478 11:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:19.478 11:40:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.735 11:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.735 11:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:19.992 11:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.250 11:41:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2999179 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2999179 ']' 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2999179 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999179 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999179' 00:22:23.529 killing process with pid 2999179 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2999179 00:22:23.529 11:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2999179 00:22:23.787 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:23.788 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.353 rmmod nvme_tcp 00:22:24.353 rmmod nvme_fabrics 00:22:24.353 rmmod nvme_keyring 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2996899 ']' 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2996899 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2996899 ']' 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2996899 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996899 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996899' 00:22:24.353 killing process with pid 2996899 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@973 -- # kill 2996899 00:22:24.353 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2996899 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.611 11:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.516 00:22:26.516 real 0m36.014s 00:22:26.516 user 2m7.089s 00:22:26.516 sys 0m5.917s 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:26.516 ************************************ 00:22:26.516 END TEST nvmf_failover 00:22:26.516 ************************************ 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.516 ************************************ 00:22:26.516 START TEST nvmf_host_discovery 00:22:26.516 ************************************ 00:22:26.516 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:26.775 * Looking for test storage... 
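The nvmftestfini teardown traced just before the END TEST nvmf_failover banner condenses to a few visible effects (names and paths as used in this run; the killprocess of the target pid and the namespace removal helper are left out):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  # restore iptables minus the rules the suite tagged with SPDK_NVMF
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1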
00:22:26.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.775 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:26.775 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:26.775 11:41:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:26.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.775 --rc genhtml_branch_coverage=1 00:22:26.775 --rc genhtml_function_coverage=1 00:22:26.775 --rc genhtml_legend=1 00:22:26.775 --rc geninfo_all_blocks=1 00:22:26.775 --rc geninfo_unexecuted_blocks=1 00:22:26.775 00:22:26.775 ' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:26.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.775 --rc genhtml_branch_coverage=1 00:22:26.775 --rc genhtml_function_coverage=1 00:22:26.775 --rc genhtml_legend=1 00:22:26.775 --rc geninfo_all_blocks=1 00:22:26.775 --rc geninfo_unexecuted_blocks=1 00:22:26.775 00:22:26.775 ' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:26.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.775 --rc genhtml_branch_coverage=1 00:22:26.775 --rc genhtml_function_coverage=1 00:22:26.775 --rc genhtml_legend=1 00:22:26.775 --rc geninfo_all_blocks=1 00:22:26.775 --rc geninfo_unexecuted_blocks=1 00:22:26.775 00:22:26.775 ' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:26.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.775 --rc genhtml_branch_coverage=1 00:22:26.775 --rc genhtml_function_coverage=1 00:22:26.775 --rc genhtml_legend=1 00:22:26.775 --rc geninfo_all_blocks=1 00:22:26.775 --rc geninfo_unexecuted_blocks=1 00:22:26.775 00:22:26.775 ' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:26.775 11:41:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.775 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.776 11:41:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.678 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.678 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.678 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.678 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.678 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.937 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.937 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:28.938 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:28.938 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.938 11:41:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:28.938 Found net devices under 0000:09:00.0: cvl_0_0 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:28.938 Found net devices under 0000:09:00.1: cvl_0_1 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.938 
11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:22:28.938 00:22:28.938 --- 10.0.0.2 ping statistics --- 00:22:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.938 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
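The nvmf_tcp_init sequence traced above splits the two cvl_0_* ports between namespaces: the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, the target gets cvl_0_0 (10.0.0.2) inside cvl_0_0_ns_spdk, and a single iptables rule opens the 4420 data port. Condensed to the commands visible in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns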
00:22:28.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:22:28.938 00:22:28.938 --- 10.0.0.1 ping statistics --- 00:22:28.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.938 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:28.938 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3002583 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3002583 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3002583 ']' 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.939 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.939 [2024-11-15 11:41:09.325733] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:22:28.939 [2024-11-15 11:41:09.325821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.197 [2024-11-15 11:41:09.398501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.197 [2024-11-15 11:41:09.457098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.197 [2024-11-15 11:41:09.457147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.197 [2024-11-15 11:41:09.457176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.197 [2024-11-15 11:41:09.457187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.197 [2024-11-15 11:41:09.457197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.197 [2024-11-15 11:41:09.457845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.197 [2024-11-15 11:41:09.606747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.197 [2024-11-15 11:41:09.614941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.197 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.456 null0 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
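By this point the target side of the discovery test is up: nvmf_tgt runs inside the target namespace on core mask 0x2, the TCP transport is created, the well-known discovery subsystem listens on 8009, and null bdevs are being created to back the test subsystem. A sketch of the equivalent bring-up, condensed from the traced calls (the script's rpc_cmd wrapper is written out here as a direct rpc.py call against the target's default socket; $SPDK is shorthand for the workspace checkout):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  # (the script waits for the RPC socket to come up before issuing these)

  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  "$SPDK/scripts/rpc.py" bdev_null_create null0 1000 512
  "$SPDK/scripts/rpc.py" bdev_null_create null1 1000 512
  "$SPDK/scripts/rpc.py" bdev_wait_for_examine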
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.456 null1 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3002608 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3002608 /tmp/host.sock 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3002608 ']' 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:29.456 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.456 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.456 [2024-11-15 11:41:09.688057] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:22:29.456 [2024-11-15 11:41:09.688137] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3002608 ] 00:22:29.456 [2024-11-15 11:41:09.753011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.456 [2024-11-15 11:41:09.810752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.714 11:41:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.714 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.715 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.973 [2024-11-15 11:41:10.264690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.973 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:29.974 11:41:10 
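The empty-string comparisons at @83/@84/@87/@88/@91/@92 all go through two small helpers that flatten RPC output into a single line; reconstructed from the traced pipeline (the script's rpc_cmd wrapper is again written out as a direct rpc.py call, /tmp/host.sock being the host-side socket used by this run):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  HOST_SOCK=/tmp/host.sock

  get_subsystem_names() {
      # controllers the host has attached; empty until discovery attaches nvme0
      "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      # bdevs visible on the host side; nvme0n1 appears once the namespace is attached
      "$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }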
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:29.974 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:30.232 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.232 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:30.232 11:41:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:30.797 [2024-11-15 11:41:11.034468] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:30.797 [2024-11-15 11:41:11.034489] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:30.797 [2024-11-15 11:41:11.034511] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:30.797 [2024-11-15 11:41:11.120831] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
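Step @103 is the trigger for the rest of the test: once nqn.2021-12.io.spdk:test is allowed on cnode0, the discovery service started back at @51 is expected to notice the new subsystem on its own and attach nvme0, which is what the bdev_nvme log messages that follow show. A sketch of the equivalent manual flow, with the addresses and NQNs from this run (the waitforcondition helper is approximated by a plain retry loop):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # target side: expose cnode0 on the data port and allow the host NQN
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # host side: discovery was started earlier with
  #   bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # so poll until the attached controller shows up (the script allows up to 10 tries)
  for _ in $(seq 1 10); do
      "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | grep -q nvme0 && break
      sleep 1
  done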
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:30.797 [2024-11-15 11:41:11.182555] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:30.797 [2024-11-15 11:41:11.183510] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d2cf80:1 started. 00:22:30.797 [2024-11-15 11:41:11.185200] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:30.797 [2024-11-15 11:41:11.185219] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:30.797 [2024-11-15 11:41:11.192696] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d2cf80 was disconnected and freed. delete nvme_qpair. 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.055 11:41:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.055 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:31.313 11:41:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.313 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.572 [2024-11-15 11:41:11.780366] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d2d660:1 started. 00:22:31.572 [2024-11-15 11:41:11.783911] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d2d660 was disconnected and freed. delete nvme_qpair. 
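[editor note] The polling pattern traced all through this run (common/autotest_common.sh@918 through @924: local cond, local max=10, eval, sleep 1, return 0) suggests a helper roughly like the sketch below. This is reconstructed from the xtrace alone, not copied from autotest_common.sh, so the exact failure handling is an assumption.

waitforcondition() {
	local cond=$1
	local max=10
	# Poll the quoted bash expression, e.g.
	# '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]', roughly once per second.
	while ((max--)); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1 # assumed: give up after ~10 one-second polls
}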
00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.572 [2024-11-15 11:41:11.849163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:31.572 [2024-11-15 11:41:11.850089] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:31.572 [2024-11-15 11:41:11.850115] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.572 [2024-11-15 11:41:11.936668] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:31.572 11:41:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:31.830 [2024-11-15 11:41:11.999445] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:31.830 [2024-11-15 11:41:11.999491] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:31.830 [2024-11-15 11:41:11.999512] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:31.830 [2024-11-15 11:41:11.999522] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:32.766 11:41:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.766 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.766 [2024-11-15 11:41:13.081741] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:32.766 [2024-11-15 11:41:13.081784] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:32.766 [2024-11-15 11:41:13.085009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.766 [2024-11-15 11:41:13.085043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.767 [2024-11-15 11:41:13.085074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.767 [2024-11-15 11:41:13.085088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.767 [2024-11-15 11:41:13.085102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.767 [2024-11-15 11:41:13.085115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.767 [2024-11-15 11:41:13.085130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.767 [2024-11-15 11:41:13.085143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.767 [2024-11-15 11:41:13.085156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:32.767 [2024-11-15 11:41:13.095000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.767 [2024-11-15 11:41:13.105041] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.767 [2024-11-15 11:41:13.105062] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.767 [2024-11-15 11:41:13.105071] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.105078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.767 [2024-11-15 11:41:13.105123] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.105387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.767 [2024-11-15 11:41:13.105422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.767 [2024-11-15 11:41:13.105439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.767 [2024-11-15 11:41:13.105463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.767 [2024-11-15 11:41:13.105497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.767 [2024-11-15 11:41:13.105514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.767 [2024-11-15 11:41:13.105530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.767 [2024-11-15 11:41:13.105542] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.767 [2024-11-15 11:41:13.105552] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
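[editor note] The subsystem, bdev and path checks around here (host/discovery.sh@55, @59 and @63) are plain rpc_cmd | jq | sort | xargs pipelines against the host-side RPC socket, exactly as the xtrace shows. A rough reconstruction of those helpers; the function wrappers themselves are an assumption, the pipelines are taken from the trace.

get_subsystem_names() {
	# Controller names as seen by the host, e.g. "nvme0"
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
	# Attached namespaces, e.g. "nvme0n1 nvme0n2"
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
	# TCP service ports of every path to the named controller, e.g. "4420 4421"
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}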
00:22:32.767 [2024-11-15 11:41:13.105559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.767 [2024-11-15 11:41:13.115171] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.767 [2024-11-15 11:41:13.115190] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.767 [2024-11-15 11:41:13.115199] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.115205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.767 [2024-11-15 11:41:13.115243] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.115453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.767 [2024-11-15 11:41:13.115481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.767 [2024-11-15 11:41:13.115497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.767 [2024-11-15 11:41:13.115520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.767 [2024-11-15 11:41:13.115540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.767 [2024-11-15 11:41:13.115553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.767 [2024-11-15 11:41:13.115566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.767 [2024-11-15 11:41:13.115578] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.767 [2024-11-15 11:41:13.115587] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.767 [2024-11-15 11:41:13.115594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.767 [2024-11-15 11:41:13.125291] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.767 [2024-11-15 11:41:13.125319] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.767 [2024-11-15 11:41:13.125329] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.125336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.767 [2024-11-15 11:41:13.125375] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:32.767 [2024-11-15 11:41:13.125600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.767 [2024-11-15 11:41:13.125627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.767 [2024-11-15 11:41:13.125643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.767 [2024-11-15 11:41:13.125665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.767 [2024-11-15 11:41:13.125698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.767 [2024-11-15 11:41:13.125715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.767 [2024-11-15 11:41:13.125728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.767 [2024-11-15 11:41:13.125740] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.767 [2024-11-15 11:41:13.125749] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.767 [2024-11-15 11:41:13.125757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:32.767 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:32.767 [2024-11-15 11:41:13.135409] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.767 [2024-11-15 11:41:13.135433] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:32.767 [2024-11-15 11:41:13.135443] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.135450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.767 [2024-11-15 11:41:13.135475] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.767 [2024-11-15 11:41:13.135574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.767 [2024-11-15 11:41:13.135601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.768 [2024-11-15 11:41:13.135617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.768 [2024-11-15 11:41:13.135645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.768 [2024-11-15 11:41:13.135666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.768 [2024-11-15 11:41:13.135679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.768 [2024-11-15 11:41:13.135691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.768 [2024-11-15 11:41:13.135703] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.768 [2024-11-15 11:41:13.135711] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.768 [2024-11-15 11:41:13.135719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.768 [2024-11-15 11:41:13.145509] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.768 [2024-11-15 11:41:13.145533] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.768 [2024-11-15 11:41:13.145543] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.145551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.768 [2024-11-15 11:41:13.145577] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:32.768 [2024-11-15 11:41:13.145756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.768 [2024-11-15 11:41:13.145784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.768 [2024-11-15 11:41:13.145800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.768 [2024-11-15 11:41:13.145822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.768 [2024-11-15 11:41:13.145854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.768 [2024-11-15 11:41:13.145870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.768 [2024-11-15 11:41:13.145884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.768 [2024-11-15 11:41:13.145896] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.768 [2024-11-15 11:41:13.145905] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.768 [2024-11-15 11:41:13.145913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.768 [2024-11-15 11:41:13.155611] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.768 [2024-11-15 11:41:13.155631] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.768 [2024-11-15 11:41:13.155640] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.155662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.768 [2024-11-15 11:41:13.155685] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.155891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.768 [2024-11-15 11:41:13.155917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.768 [2024-11-15 11:41:13.155933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.768 [2024-11-15 11:41:13.155960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.768 [2024-11-15 11:41:13.155981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.768 [2024-11-15 11:41:13.155994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.768 [2024-11-15 11:41:13.156007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.768 [2024-11-15 11:41:13.156018] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
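[editor note] The repeating "connect() failed, errno = 111" / "Resetting controller failed" entries above are the expected fallout of the listener changes driven earlier in this run: the target added a listener on 4421 (host/discovery.sh@118) and then dropped the 4420 one (host/discovery.sh@127), so the host keeps getting ECONNREFUSED while it still holds a path to 4420. The target-side calls, as traced:

rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The retries stop once the next discovery log page drops the 4420 path and only 4421 remains.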
00:22:32.768 [2024-11-15 11:41:13.156027] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.768 [2024-11-15 11:41:13.156035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.768 [2024-11-15 11:41:13.165733] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.768 [2024-11-15 11:41:13.165754] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.768 [2024-11-15 11:41:13.165763] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.165770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.768 [2024-11-15 11:41:13.165806] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.165933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.768 [2024-11-15 11:41:13.165959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.768 [2024-11-15 11:41:13.165974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.768 [2024-11-15 11:41:13.165996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.768 [2024-11-15 11:41:13.166026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.768 [2024-11-15 11:41:13.166043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.768 [2024-11-15 11:41:13.166055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.768 [2024-11-15 11:41:13.166066] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.768 [2024-11-15 11:41:13.166075] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.768 [2024-11-15 11:41:13.166082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:32.768 [2024-11-15 11:41:13.175839] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.768 [2024-11-15 11:41:13.175860] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.768 [2024-11-15 11:41:13.175868] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.175875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.768 [2024-11-15 11:41:13.175913] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.176144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.768 [2024-11-15 11:41:13.176172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.768 [2024-11-15 11:41:13.176188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.768 [2024-11-15 11:41:13.176210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.768 [2024-11-15 11:41:13.176230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.768 [2024-11-15 11:41:13.176243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.768 [2024-11-15 11:41:13.176256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.768 [2024-11-15 11:41:13.176267] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.768 [2024-11-15 11:41:13.176276] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.768 [2024-11-15 11:41:13.176284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:32.768 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:32.768 [2024-11-15 11:41:13.185947] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:32.768 [2024-11-15 11:41:13.185971] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:32.768 [2024-11-15 11:41:13.185980] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:32.768 [2024-11-15 11:41:13.185988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:32.768 [2024-11-15 11:41:13.186014] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:32.769 [2024-11-15 11:41:13.186157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.769 [2024-11-15 11:41:13.186185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:32.769 [2024-11-15 11:41:13.186201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:32.769 [2024-11-15 11:41:13.186228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:32.769 [2024-11-15 11:41:13.186275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:32.769 [2024-11-15 11:41:13.186293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:32.769 [2024-11-15 11:41:13.186315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:32.769 [2024-11-15 11:41:13.186328] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:32.769 [2024-11-15 11:41:13.186337] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:32.769 [2024-11-15 11:41:13.186345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:32.769 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.027 [2024-11-15 11:41:13.196049] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:33.027 [2024-11-15 11:41:13.196071] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:33.027 [2024-11-15 11:41:13.196081] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:33.027 [2024-11-15 11:41:13.196089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:33.027 [2024-11-15 11:41:13.196113] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:33.027 [2024-11-15 11:41:13.196240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.027 [2024-11-15 11:41:13.196267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:33.027 [2024-11-15 11:41:13.196283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:33.027 [2024-11-15 11:41:13.196313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:33.027 [2024-11-15 11:41:13.196336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:33.027 [2024-11-15 11:41:13.196349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:33.027 [2024-11-15 11:41:13.196362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:33.027 [2024-11-15 11:41:13.196373] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:33.028 [2024-11-15 11:41:13.196382] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:33.028 [2024-11-15 11:41:13.196390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:33.028 [2024-11-15 11:41:13.206146] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:33.028 [2024-11-15 11:41:13.206166] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:33.028 [2024-11-15 11:41:13.206174] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:33.028 [2024-11-15 11:41:13.206181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:33.028 [2024-11-15 11:41:13.206217] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
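[editor note] The notification checks that bracket these steps (host/discovery.sh@74 through @80) count new bdev events via notify_get_notifications. A hedged reconstruction from the traced values; the notify_id arithmetic is inferred from its 0 -> 1 -> 2 progression in this run, not quoted from the script.

get_notification_count() {
	notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
	notify_id=$((notify_id + notification_count)) # inferred: keep -i pointing past events already counted
}

is_notification_count_eq() {
	expected_count=$1
	waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}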
00:22:33.028 [2024-11-15 11:41:13.206352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.028 [2024-11-15 11:41:13.206380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cfd550 with addr=10.0.0.2, port=4420 00:22:33.028 [2024-11-15 11:41:13.206401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd550 is same with the state(6) to be set 00:22:33.028 [2024-11-15 11:41:13.206423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd550 (9): Bad file descriptor 00:22:33.028 [2024-11-15 11:41:13.206455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:33.028 [2024-11-15 11:41:13.206472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:33.028 [2024-11-15 11:41:13.206485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:33.028 [2024-11-15 11:41:13.206497] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:33.028 [2024-11-15 11:41:13.206505] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:33.028 [2024-11-15 11:41:13.206513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:33.028 [2024-11-15 11:41:13.207759] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:33.028 [2024-11-15 11:41:13.207784] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:33.028 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:33.028 11:41:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:33.962 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:34.220 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:34.221 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.221 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.221 11:41:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.154 [2024-11-15 11:41:15.515953] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:35.154 [2024-11-15 11:41:15.515981] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:35.154 [2024-11-15 11:41:15.516003] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:35.412 [2024-11-15 11:41:15.644435] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:35.671 [2024-11-15 11:41:15.910836] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:35.671 [2024-11-15 11:41:15.911718] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d137b0:1 started. 
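The waitforcondition trace a few lines up shows how the discovery test polls for state changes: the condition string is eval'ed up to max=10 times with a one-second sleep between attempts, and the helper returns 0 as soon as the condition holds. A minimal sketch of that pattern, paraphrased from the xtrace rather than copied from autotest_common.sh (get_notification_count, notification_count and expected_count are assumed to come from host/discovery.sh):

  waitforcondition() {
      local cond=$1                  # an arbitrary shell condition, evaluated verbatim
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0   # condition met
          sleep 1                    # retry once per second
      done
      return 1                       # gave up after ~10 seconds
  }
  # e.g. waitforcondition 'get_notification_count && ((notification_count == expected_count))'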
00:22:35.671 [2024-11-15 11:41:15.913891] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:35.671 [2024-11-15 11:41:15.913933] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:35.671 [2024-11-15 11:41:15.915556] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d137b0 was disconnected and freed. delete nvme_qpair. 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 request: 00:22:35.671 { 00:22:35.671 "name": "nvme", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "8009", 00:22:35.671 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:35.671 "wait_for_attach": true, 00:22:35.671 "method": "bdev_nvme_start_discovery", 00:22:35.671 "req_id": 1 00:22:35.671 } 00:22:35.671 Got JSON-RPC error response 00:22:35.671 response: 00:22:35.671 { 00:22:35.671 "code": -17, 00:22:35.671 "message": "File exists" 00:22:35.671 } 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # jq -r '.[].name' 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.671 11:41:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 request: 00:22:35.671 { 00:22:35.671 "name": "nvme_second", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "8009", 00:22:35.671 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:35.671 "wait_for_attach": true, 00:22:35.671 "method": 
"bdev_nvme_start_discovery", 00:22:35.671 "req_id": 1 00:22:35.671 } 00:22:35.671 Got JSON-RPC error response 00:22:35.671 response: 00:22:35.671 { 00:22:35.671 "code": -17, 00:22:35.671 "message": "File exists" 00:22:35.671 } 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.671 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:35.929 11:41:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.929 11:41:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.863 [2024-11-15 11:41:17.117345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.863 [2024-11-15 11:41:17.117408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2c8b0 with addr=10.0.0.2, port=8010 00:22:36.863 [2024-11-15 11:41:17.117438] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:36.863 [2024-11-15 11:41:17.117453] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:36.863 [2024-11-15 11:41:17.117466] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:37.808 [2024-11-15 11:41:18.119817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.808 [2024-11-15 11:41:18.119887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d38ff0 with addr=10.0.0.2, port=8010 00:22:37.808 [2024-11-15 11:41:18.119919] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:37.808 [2024-11-15 11:41:18.119934] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:37.808 [2024-11-15 11:41:18.119947] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:38.742 [2024-11-15 11:41:19.121990] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:38.742 request: 00:22:38.742 { 00:22:38.742 "name": "nvme_second", 00:22:38.742 "trtype": "tcp", 00:22:38.742 "traddr": "10.0.0.2", 00:22:38.742 "adrfam": "ipv4", 00:22:38.742 "trsvcid": "8010", 00:22:38.742 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:38.742 "wait_for_attach": false, 00:22:38.742 "attach_timeout_ms": 3000, 00:22:38.742 "method": "bdev_nvme_start_discovery", 00:22:38.742 "req_id": 1 00:22:38.742 } 00:22:38.742 Got JSON-RPC error response 00:22:38.742 response: 00:22:38.742 { 00:22:38.742 "code": -110, 00:22:38.742 "message": "Connection timed out" 00:22:38.742 } 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:38.742 11:41:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.742 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3002608 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.000 rmmod nvme_tcp 00:22:39.000 rmmod nvme_fabrics 00:22:39.000 rmmod nvme_keyring 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3002583 ']' 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3002583 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3002583 ']' 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3002583 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3002583 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3002583' 00:22:39.000 killing process with pid 3002583 00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3002583 
00:22:39.000 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3002583 00:22:39.260 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.260 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.260 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.260 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:39.260 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:39.260 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.261 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.261 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.261 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.261 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.261 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.261 11:41:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.166 00:22:41.166 real 0m14.650s 00:22:41.166 user 0m21.811s 00:22:41.166 sys 0m2.999s 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.166 ************************************ 00:22:41.166 END TEST nvmf_host_discovery 00:22:41.166 ************************************ 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.166 11:41:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.425 ************************************ 00:22:41.425 START TEST nvmf_host_multipath_status 00:22:41.425 ************************************ 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:41.425 * Looking for test storage... 
00:22:41.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.425 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:41.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.425 --rc genhtml_branch_coverage=1 00:22:41.425 --rc genhtml_function_coverage=1 00:22:41.425 --rc genhtml_legend=1 00:22:41.425 --rc geninfo_all_blocks=1 00:22:41.426 --rc geninfo_unexecuted_blocks=1 00:22:41.426 00:22:41.426 ' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.426 --rc genhtml_branch_coverage=1 00:22:41.426 --rc genhtml_function_coverage=1 00:22:41.426 --rc genhtml_legend=1 00:22:41.426 --rc geninfo_all_blocks=1 00:22:41.426 --rc geninfo_unexecuted_blocks=1 00:22:41.426 00:22:41.426 ' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.426 --rc genhtml_branch_coverage=1 00:22:41.426 --rc genhtml_function_coverage=1 00:22:41.426 --rc genhtml_legend=1 00:22:41.426 --rc geninfo_all_blocks=1 00:22:41.426 --rc geninfo_unexecuted_blocks=1 00:22:41.426 00:22:41.426 ' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:41.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.426 --rc genhtml_branch_coverage=1 00:22:41.426 --rc genhtml_function_coverage=1 00:22:41.426 --rc genhtml_legend=1 00:22:41.426 --rc geninfo_all_blocks=1 00:22:41.426 --rc geninfo_unexecuted_blocks=1 00:22:41.426 00:22:41.426 ' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
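The lt/cmp_versions trace above is the harness deciding that the installed lcov (1.15) predates version 2, which is why the older --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings end up in LCOV_OPTS. The comparison itself is a field-by-field numeric compare after splitting on '.', '-' and ':'; a condensed sketch of that logic for plain numeric versions (illustrative, not the verbatim scripts/common.sh):

  lt() {   # succeeds when $1 sorts strictly before $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # versions are equal
  }
  lt 1.15 2 && echo 'lcov is older than 2'   # prints the message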
00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.426 11:41:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.049 11:41:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.049 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:44.050 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:44.050 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:44.050 Found net devices under 0000:09:00.0: cvl_0_0 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: 
cvl_0_1' 00:22:44.050 Found net devices under 0000:09:00.1: cvl_0_1 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.050 11:41:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:22:44.050 00:22:44.050 --- 10.0.0.2 ping statistics --- 00:22:44.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.050 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:44.050 00:22:44.050 --- 10.0.0.1 ping statistics --- 00:22:44.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.050 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3005911 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3005911 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3005911 ']' 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.050 11:41:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.050 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:44.050 [2024-11-15 11:41:24.240774] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:22:44.050 [2024-11-15 11:41:24.240852] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.050 [2024-11-15 11:41:24.309403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.050 [2024-11-15 11:41:24.365065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.050 [2024-11-15 11:41:24.365134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.050 [2024-11-15 11:41:24.365162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.050 [2024-11-15 11:41:24.365173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.050 [2024-11-15 11:41:24.365182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.050 [2024-11-15 11:41:24.366742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.050 [2024-11-15 11:41:24.366748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.309 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3005911 00:22:44.310 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:44.566 [2024-11-15 11:41:24.817029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.566 11:41:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:44.824 Malloc0 00:22:44.824 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:22:45.082 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.340 11:41:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.598 [2024-11-15 11:41:25.991187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.598 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:46.165 [2024-11-15 11:41:26.308067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3006200 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3006200 /var/tmp/bdevperf.sock 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3006200 ']' 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:46.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
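The trace above finishes provisioning the target side that both host paths will point at: a TCP transport, a 64 MiB malloc bdev, one subsystem with ANA reporting, and two listeners on the same address but different ports (4420 and 4421). A condensed sketch of that RPC sequence as it can be read off the log (rpc.py is the workspace copy used throughout this run, talking to the nvmf_tgt started earlier inside cvl_0_0_ns_spdk; flags are copied verbatim from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                       # TCP transport for the target
  $rpc bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: ANA reporting on
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # expose Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners are what bdevperf attaches to next as two I/O paths of the same controller.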
00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.165 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:46.422 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.422 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:46.422 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:46.681 11:41:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:46.939 Nvme0n1 00:22:46.939 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:47.504 Nvme0n1 00:22:47.505 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:47.505 11:41:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:49.404 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:49.404 11:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:49.662 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:49.920 11:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.293 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:51.551 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.551 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:51.551 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.551 11:41:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:51.809 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.809 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:51.809 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.809 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.067 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.067 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:52.067 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.067 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.325 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.325 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:52.325 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.325 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:52.584 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.584 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:52.584 11:41:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
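Every status probe in this section is the same two-step pattern: query bdevperf over its RPC socket for the current I/O paths, then filter the JSON with jq down to one attribute of the path whose trsvcid matches the port under test. A minimal re-creation of that helper as it can be read off the trace (function and variable names here are illustrative, not the script's literal source):

  port_status() {   # port_status <trsvcid> <attribute: current|connected|accessible> <expected>
      local port=$1 attr=$2 expected=$3 got
      got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$got" == "$expected" ]]   # non-zero exit fails the surrounding check
  }

  port_status 4420 current true    # passes while the 4420 path is the one carrying I/O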
00:22:52.842 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:53.100 11:41:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.492 11:41:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.756 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.756 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.756 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.756 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.013 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.013 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.013 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.013 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.271 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.271 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.271 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
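A state transition in this trace is always a pair of nvmf_subsystem_listener_set_ana_state calls, one per listener, after which the script sleeps for a second so the initiator has time to process the ANA change before the paths are re-checked. A sketch of that pair, following the calls shown above (the workspace path to rpc.py is shortened here):

  set_ANA_state() {   # set_ANA_state <state for port 4420> <state for port 4421>
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  set_ANA_state non_optimized optimized   # as in the step just logged
  sleep 1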
00:22:55.271 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.529 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.529 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:55.529 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.529 11:41:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:55.787 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.787 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:55.787 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:56.045 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:56.302 11:41:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.676 11:41:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:57.933 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.933 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:57.933 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.933 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.191 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.191 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.191 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.191 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.450 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.450 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.450 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.450 11:41:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.707 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.707 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:58.707 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.707 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:58.965 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.965 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:58.965 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:59.529 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:59.529 11:41:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:00.902 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:00.902 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.902 11:41:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.902 11:41:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.902 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.902 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:00.902 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.902 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.159 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.159 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.159 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.159 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.417 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.417 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.417 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.417 11:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.674 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.674 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.675 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.675 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.932 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.932 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:01.932 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.932 11:41:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.189 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:02.189 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:02.189 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:02.755 11:41:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:02.755 11:41:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.126 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.384 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.384 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.384 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.384 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.640 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.640 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.640 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.640 11:41:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.898 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.898 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:04.898 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.898 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.156 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.156 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:05.156 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.156 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.414 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.414 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:05.414 11:41:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:05.672 11:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:05.929 11:41:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.302 11:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.302 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.559 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.560 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.560 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.560 11:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.818 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.818 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.818 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.818 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.131 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.131 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:08.131 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.131 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.413 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.413 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.413 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.413 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.671 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.671 11:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:08.929 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:08.930 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:09.188 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:09.446 11:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:10.379 11:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:10.379 11:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.379 11:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.379 11:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.637 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.637 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.637 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.637 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.200 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.458 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.458 11:41:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.458 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.458 11:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:12.024 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:12.282 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:12.848 11:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:13.783 11:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:13.783 11:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:13.783 11:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.783 11:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.041 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.041 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:14.041 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.041 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.299 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.299 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.299 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.299 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.555 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.555 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.555 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.555 11:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.812 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.812 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.813 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.813 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.070 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.070 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:15.070 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.071 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.329 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.329 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:15.329 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:15.587 11:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:15.844 11:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
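From the bdev_nvme_set_multipath_policy call onward the controller runs active_active, and each check_status line in the trace expands into six port_status probes in a fixed order: the current flag for both ports, then connected for both, then accessible for both. A sketch of that expansion, reconstructed from the order of the rpc.py/jq pairs above rather than from the script's literal source:

  check_status() {   # check_status <cur 4420> <cur 4421> <conn 4420> <conn 4421> <acc 4420> <acc 4421>
      port_status 4420 current    "$1"
      port_status 4421 current    "$2"
      port_status 4420 connected  "$3"
      port_status 4421 connected  "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }

  check_status true true true true true true   # both paths current at once, as active_active allows below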
00:23:16.778 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:16.778 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.778 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.778 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.345 11:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.603 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.603 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.604 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.604 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.170 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.428 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.428 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:18.428 11:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:18.686 11:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:18.944 11:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.317 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.575 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.575 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.575 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.575 11:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.833 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:23:20.833 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.833 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.833 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.398 11:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3006200 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3006200 ']' 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3006200 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006200 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006200' 00:23:21.976 killing process with pid 3006200 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3006200 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3006200 00:23:21.976 { 00:23:21.976 "results": [ 00:23:21.976 { 00:23:21.976 "job": "Nvme0n1", 
00:23:21.976 "core_mask": "0x4", 00:23:21.976 "workload": "verify", 00:23:21.976 "status": "terminated", 00:23:21.976 "verify_range": { 00:23:21.976 "start": 0, 00:23:21.976 "length": 16384 00:23:21.976 }, 00:23:21.976 "queue_depth": 128, 00:23:21.976 "io_size": 4096, 00:23:21.976 "runtime": 34.256935, 00:23:21.976 "iops": 8057.9888422592385, 00:23:21.976 "mibps": 31.47651891507515, 00:23:21.976 "io_failed": 0, 00:23:21.976 "io_timeout": 0, 00:23:21.976 "avg_latency_us": 15857.9528191389, 00:23:21.976 "min_latency_us": 388.36148148148146, 00:23:21.976 "max_latency_us": 4026531.84 00:23:21.976 } 00:23:21.976 ], 00:23:21.976 "core_count": 1 00:23:21.976 } 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3006200 00:23:21.976 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.976 [2024-11-15 11:41:26.371729] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:23:21.976 [2024-11-15 11:41:26.371817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006200 ] 00:23:21.976 [2024-11-15 11:41:26.439370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.976 [2024-11-15 11:41:26.498080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:21.976 Running I/O for 90 seconds... 00:23:21.976 8644.00 IOPS, 33.77 MiB/s [2024-11-15T10:42:02.403Z] 8701.00 IOPS, 33.99 MiB/s [2024-11-15T10:42:02.403Z] 8678.67 IOPS, 33.90 MiB/s [2024-11-15T10:42:02.403Z] 8666.25 IOPS, 33.85 MiB/s [2024-11-15T10:42:02.403Z] 8621.20 IOPS, 33.68 MiB/s [2024-11-15T10:42:02.403Z] 8622.33 IOPS, 33.68 MiB/s [2024-11-15T10:42:02.403Z] 8603.00 IOPS, 33.61 MiB/s [2024-11-15T10:42:02.403Z] 8606.00 IOPS, 33.62 MiB/s [2024-11-15T10:42:02.403Z] 8595.67 IOPS, 33.58 MiB/s [2024-11-15T10:42:02.403Z] 8590.60 IOPS, 33.56 MiB/s [2024-11-15T10:42:02.403Z] 8594.73 IOPS, 33.57 MiB/s [2024-11-15T10:42:02.403Z] 8599.25 IOPS, 33.59 MiB/s [2024-11-15T10:42:02.403Z] 8603.00 IOPS, 33.61 MiB/s [2024-11-15T10:42:02.403Z] 8605.71 IOPS, 33.62 MiB/s [2024-11-15T10:42:02.403Z] [2024-11-15 11:41:42.849163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849390] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.849562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.849585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.851024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.851064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.851106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.851125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.976 [2024-11-15 11:41:42.851148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.976 [2024-11-15 11:41:42.851164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.977 [2024-11-15 11:41:42.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.977 [2024-11-15 11:41:42.851243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 
11:41:42.851265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.977 [2024-11-15 11:41:42.851281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.977 [2024-11-15 11:41:42.851333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.851931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.977 [2024-11-15 11:41:42.851967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.851988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.977 [2024-11-15 11:41:42.852436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.977 [2024-11-15 11:41:42.852457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.852976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.852992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 
11:41:42.853206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.978 [2024-11-15 11:41:42.853954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.978 [2024-11-15 11:41:42.853980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.853996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.979 [2024-11-15 11:41:42.854878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.854983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.854998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855104] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.979 [2024-11-15 11:41:42.855505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.979 
[2024-11-15 11:41:42.855546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.979 [2024-11-15 11:41:42.855562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.855969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.855984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.856008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.856023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.856048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.856063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.856087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.856103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.856128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.856143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.856167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.856183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:42.856208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.980 [2024-11-15 11:41:42.856223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.980 8597.73 IOPS, 33.58 MiB/s [2024-11-15T10:42:02.407Z] 8060.38 IOPS, 31.49 MiB/s [2024-11-15T10:42:02.407Z] 7586.24 IOPS, 29.63 MiB/s [2024-11-15T10:42:02.407Z] 7164.78 IOPS, 27.99 MiB/s [2024-11-15T10:42:02.407Z] 6788.58 IOPS, 26.52 MiB/s [2024-11-15T10:42:02.407Z] 6873.30 IOPS, 26.85 MiB/s [2024-11-15T10:42:02.407Z] 6953.19 IOPS, 27.16 MiB/s [2024-11-15T10:42:02.407Z] 7065.00 IOPS, 27.60 MiB/s [2024-11-15T10:42:02.407Z] 7243.70 IOPS, 28.30 MiB/s [2024-11-15T10:42:02.407Z] 7426.50 IOPS, 29.01 MiB/s [2024-11-15T10:42:02.407Z] 7569.00 IOPS, 29.57 MiB/s [2024-11-15T10:42:02.407Z] 7611.73 IOPS, 29.73 MiB/s [2024-11-15T10:42:02.407Z] 7643.67 IOPS, 29.86 MiB/s [2024-11-15T10:42:02.407Z] 7677.14 IOPS, 29.99 MiB/s [2024-11-15T10:42:02.407Z] 7763.66 IOPS, 30.33 MiB/s [2024-11-15T10:42:02.407Z] 7877.33 IOPS, 30.77 MiB/s [2024-11-15T10:42:02.407Z] 7977.16 IOPS, 31.16 MiB/s [2024-11-15T10:42:02.407Z] [2024-11-15 11:41:59.351298] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 
m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.980 [2024-11-15 11:41:59.351810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.980 [2024-11-15 11:41:59.351831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.351847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.351869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.981 [2024-11-15 11:41:59.351885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.351906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.351926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.351948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.351964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.351985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.352389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.352406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.981 [2024-11-15 11:41:59.353845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.981 [2024-11-15 11:41:59.353867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.981 [2024-11-15 11:41:59.353884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.353906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.353923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.353945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.353962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.353984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.354467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.354513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.354971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.354993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.982 [2024-11-15 11:41:59.355010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:23:21.982 [2024-11-15 11:41:59.355463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.982 [2024-11-15 11:41:59.355657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.982 [2024-11-15 11:41:59.355673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.355700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.355717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.355740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.355756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.355778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.355794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.355817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.355833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.357833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.357870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.357908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.357946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.357967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.357984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.983 [2024-11-15 11:41:59.358205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.983 [2024-11-15 11:41:59.358616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.983 [2024-11-15 11:41:59.358675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.983 [2024-11-15 11:41:59.358691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.358732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.358770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.358807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.358843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.358880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.358916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.358953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.358974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.358990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.359011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.359027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.359049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.359065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.359087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.359103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.359124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.359141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.359162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.359178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.359204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.359220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:23:21.984 [2024-11-15 11:41:59.361788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.361933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.361955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.361971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.984 [2024-11-15 11:41:59.362500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.984 [2024-11-15 11:41:59.362559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.984 [2024-11-15 11:41:59.362575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.362635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.362674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.362711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.362765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.362803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.362841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.362917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.362955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.362977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:21.985 [2024-11-15 11:41:59.362993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.363032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.363087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.363725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.363772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.363811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.363848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.363886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.363939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.363977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.363997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.364013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.364050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.985 [2024-11-15 11:41:59.364534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.985 [2024-11-15 11:41:59.364571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.985 [2024-11-15 11:41:59.364593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.364633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.364671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.364720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.364761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:23:21.986 [2024-11-15 11:41:59.364804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.364843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.364860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.366546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.366585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.366634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.366832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.366870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.366908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.366968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.986 [2024-11-15 11:41:59.366984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.986 [2024-11-15 11:41:59.367005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.986 [2024-11-15 11:41:59.367021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.987 [2024-11-15 11:41:59.367388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.367584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.367723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.367740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.370209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.370261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.987 [2024-11-15 11:41:59.370632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.370670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.370707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.370745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.987 [2024-11-15 11:41:59.370770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.987 [2024-11-15 11:41:59.370787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.370811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.370828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.370850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.370866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.370888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.370904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.370926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.370943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.370964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.370980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:23:21.988 [2024-11-15 11:41:59.371007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.371024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.371062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.371137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.371251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.371428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.371467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.371647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.371663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.988 [2024-11-15 11:41:59.373674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.988 [2024-11-15 11:41:59.373712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.988 [2024-11-15 11:41:59.373734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.988 [2024-11-15 11:41:59.373750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.373772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.373810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.373826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.373848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.373865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.373902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.373918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.373939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.373970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.373991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 
nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.374520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.374598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.374615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.376980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.377006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.377053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:23:21.989 [2024-11-15 11:41:59.377315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.989 [2024-11-15 11:41:59.377422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.377462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.989 [2024-11-15 11:41:59.377501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.989 [2024-11-15 11:41:59.377523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.377540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.377578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.377616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.377664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.377702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.377755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.377791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.377826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.377861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.377901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.377953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.377975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.378045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.378083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.378159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.378197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.378425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.378441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.379653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.990 [2024-11-15 11:41:59.379699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.379738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.379776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.379814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.379852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.379890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.379928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.379966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.379988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.380004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.380040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.380056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.380082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.990 [2024-11-15 11:41:59.380114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.380137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.380153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.380175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.990 [2024-11-15 11:41:59.380191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.990 [2024-11-15 11:41:59.380213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.380391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.380428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.380466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.380504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.380607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.380624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.381445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.381569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.381622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.381673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:23:21.991 [2024-11-15 11:41:59.381696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.381804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.381842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.381947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.381963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.382415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.382440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.382468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.991 [2024-11-15 11:41:59.382486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.382509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.382525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.382546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.991 [2024-11-15 11:41:59.382562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.991 [2024-11-15 11:41:59.382592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.382757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.382795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.382965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.382981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.383002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.383019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.383041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.383057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.383078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.383095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.383116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.383133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.383155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.383171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.383193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.383224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.384760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.384805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.992 [2024-11-15 11:41:59.384852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.384891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.384929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.384967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.384988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.385134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.385171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.385244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.992 [2024-11-15 11:41:59.385352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.992 [2024-11-15 11:41:59.385465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.992 [2024-11-15 11:41:59.385487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.385632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385653] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.385779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.385800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.385835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.387729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.387775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.387814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.387851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.387889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:21.993 [2024-11-15 11:41:59.387911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.387927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.387965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.387987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.388003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.388024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.388041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.388062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.388079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.388101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.388117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.389296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.389353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.389392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.389430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.993 [2024-11-15 11:41:59.389723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.993 [2024-11-15 11:41:59.389872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.993 [2024-11-15 11:41:59.389893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.389910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.389931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.389947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.389969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:21.994 [2024-11-15 11:41:59.390276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.390951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.390971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.390986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.391007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.391023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.392929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.994 [2024-11-15 11:41:59.392967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.392994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.393027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.393051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:54256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.393067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.393089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.393105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.393127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.393143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.393165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.393180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.393202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.994 [2024-11-15 11:41:59.393224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:21.994 [2024-11-15 11:41:59.393248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.393265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.393701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.393746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:23:21.995 [2024-11-15 11:41:59.393768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.393785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.393823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.393862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.393900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.393953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.393974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.393989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.995 [2024-11-15 11:41:59.394782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.995 [2024-11-15 11:41:59.394923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.995 [2024-11-15 11:41:59.394958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:21.995 [2024-11-15 11:41:59.394979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.394994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.395015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.395045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.396443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.396496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:54416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.396768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.396806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.396963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.396986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.397218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.397370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.996 [2024-11-15 11:41:59.397484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:23:21.996 [2024-11-15 11:41:59.397548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.996 [2024-11-15 11:41:59.397564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:21.996 [2024-11-15 11:41:59.397600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.397617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.397638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.397668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.397690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:54392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.397706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.398287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.398319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.398347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.398365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.398388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.398405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:54576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:54592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:54608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.400764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.400965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.400987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.401003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:54432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:54496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.401162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:54272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.401275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.997 [2024-11-15 11:41:59.401322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.401400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.401438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.997 [2024-11-15 11:41:59.401514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.997 [2024-11-15 11:41:59.401557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:21.997 [2024-11-15 11:41:59.401598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.401615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.401637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.401668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.401691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.401707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.401729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.401744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.401766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.401783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.401805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.401822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.403234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.403311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:54736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.403690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.403725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.403761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:54760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:54792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:54808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
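The NOTICE pairs above are bdevperf I/Os on queue pair 1 completing with the path-related status ASYMMETRIC ACCESS INACCESSIBLE (status code type 0x3, status code 0x02): while the test reports one path as ANA-inaccessible, every command queued to it is failed back with this completion and retried on the other path, which is why each print_command line is immediately followed by a matching print_completion line. As a rough sketch, assuming the bdevperf output was captured in the try.txt file the script removes further down, the failed-over completions can be tallied with plain grep:

# count completions that came back with ANA-inaccessible status (03/02)
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt
# split the retried commands by opcode
grep 'nvme_io_qpair_print_command' try.txt | grep -c 'READ sqid'
grep 'nvme_io_qpair_print_command' try.txt | grep -c 'WRITE sqid'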
00:23:21.998 [2024-11-15 11:41:59.403922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:21.998 [2024-11-15 11:41:59.403938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.403978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:21.998 [2024-11-15 11:41:59.403999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.998 [2024-11-15 11:41:59.404014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:21.998 8024.88 IOPS, 31.35 MiB/s [2024-11-15T10:42:02.425Z] 8040.00 IOPS, 31.41 MiB/s [2024-11-15T10:42:02.425Z] 8056.59 IOPS, 31.47 MiB/s [2024-11-15T10:42:02.425Z] Received shutdown signal, test time was about 34.257744 seconds 00:23:21.998 00:23:21.998 Latency(us) 00:23:21.998 [2024-11-15T10:42:02.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.998 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:21.998 Verification LBA range: start 0x0 length 0x4000 00:23:21.998 Nvme0n1 : 34.26 8057.99 31.48 0.00 0.00 15857.95 388.36 4026531.84 00:23:21.998 [2024-11-15T10:42:02.425Z] =================================================================================================================== 00:23:21.998 [2024-11-15T10:42:02.425Z] Total : 8057.99 31.48 0.00 0.00 15857.95 388.36 4026531.84 00:23:21.998 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:22.257 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:22.257 rmmod nvme_tcp 00:23:22.257 rmmod nvme_fabrics 00:23:22.257 rmmod nvme_keyring 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@128 -- # set -e 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3005911 ']' 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3005911 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3005911 ']' 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3005911 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3005911 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3005911' 00:23:22.515 killing process with pid 3005911 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3005911 00:23:22.515 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3005911 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.776 11:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.776 11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.776 11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.776 11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.776 11:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.677 11:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.678 00:23:24.678 real 0m43.428s 00:23:24.678 user 2m12.046s 00:23:24.678 sys 0m10.650s 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # 
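For readability, here is a condensed sketch of the teardown the trace above walks through. The nvmftestfini/nvmf_tcp_fini helpers wrap these steps; the paths, the cvl_0_* interface names and pid 3005911 are specific to this run, and the helpers' exact internals may differ slightly:

# remove the test subsystem and the captured bdevperf log
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f test/nvmf/host/try.txt
# unload the initiator-side kernel modules (nvme_keyring is dropped with them)
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the nvmf_tgt process that served the test
kill 3005911
# undo the network plumbing: strip the SPDK_NVMF iptables rules, delete the
# target namespace and flush the initiator-side address
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1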
set +x 00:23:24.678 ************************************ 00:23:24.678 END TEST nvmf_host_multipath_status 00:23:24.678 ************************************ 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.678 ************************************ 00:23:24.678 START TEST nvmf_discovery_remove_ifc 00:23:24.678 ************************************ 00:23:24.678 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:24.938 * Looking for test storage... 00:23:24.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:24.938 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.939 --rc genhtml_branch_coverage=1 00:23:24.939 --rc genhtml_function_coverage=1 00:23:24.939 --rc genhtml_legend=1 00:23:24.939 --rc geninfo_all_blocks=1 00:23:24.939 --rc geninfo_unexecuted_blocks=1 00:23:24.939 00:23:24.939 ' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.939 --rc genhtml_branch_coverage=1 00:23:24.939 --rc genhtml_function_coverage=1 00:23:24.939 --rc genhtml_legend=1 00:23:24.939 --rc geninfo_all_blocks=1 00:23:24.939 --rc geninfo_unexecuted_blocks=1 00:23:24.939 00:23:24.939 ' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.939 --rc genhtml_branch_coverage=1 00:23:24.939 --rc genhtml_function_coverage=1 00:23:24.939 --rc genhtml_legend=1 00:23:24.939 --rc geninfo_all_blocks=1 00:23:24.939 --rc geninfo_unexecuted_blocks=1 00:23:24.939 00:23:24.939 ' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.939 --rc genhtml_branch_coverage=1 00:23:24.939 --rc genhtml_function_coverage=1 00:23:24.939 --rc genhtml_legend=1 00:23:24.939 --rc geninfo_all_blocks=1 00:23:24.939 --rc geninfo_unexecuted_blocks=1 00:23:24.939 00:23:24.939 ' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.939 
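The block above is scripts/common.sh deciding, field by field, whether the installed lcov (1.15 here) is older than 2, so that the extra --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options get added to LCOV_OPTS. A minimal sketch of that comparison, with illustrative names rather than the exact helpers:

# split both versions on '.', compare numerically component by component,
# and succeed only for "ver1 < ver2"; missing components count as 0
version_lt() {
    local -a a b
    local i
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal, so not strictly less-than
}

version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_*_coverage=1 options"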
11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:24.939 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.940 11:42:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:27.477 11:42:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:27.477 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.477 11:42:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:27.477 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:27.477 Found net devices under 0000:09:00.0: cvl_0_0 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:27.477 Found net devices under 0000:09:00.1: cvl_0_1 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:27.477 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:27.478 
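The nvmf_tcp_init sequence traced above is what lets a single host exercise a real E810 port pair over TCP: one port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP traffic. A minimal sketch of the same topology, using the device names and addresses observed in this run (they are specific to this rig, not defaults):

ip netns add cvl_0_0_ns_spdk                                   # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port

The two ping checks that follow are the sanity test that the 10.0.0.1 <-> 10.0.0.2 path works in both directions before any SPDK process is started.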
11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:27.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:27.478 00:23:27.478 --- 10.0.0.2 ping statistics --- 00:23:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.478 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:23:27.478 00:23:27.478 --- 10.0.0.1 ping statistics --- 00:23:27.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.478 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3012893 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3012893 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3012893 ']' 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:27.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.478 [2024-11-15 11:42:07.646848] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:23:27.478 [2024-11-15 11:42:07.646934] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.478 [2024-11-15 11:42:07.718126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.478 [2024-11-15 11:42:07.776920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.478 [2024-11-15 11:42:07.776959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.478 [2024-11-15 11:42:07.776988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.478 [2024-11-15 11:42:07.777001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.478 [2024-11-15 11:42:07.777011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.478 [2024-11-15 11:42:07.777564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.478 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.736 [2024-11-15 11:42:07.931988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.736 [2024-11-15 11:42:07.940169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:27.736 null0 00:23:27.736 [2024-11-15 11:42:07.972126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3013118 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3013118 /tmp/host.sock 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3013118 ']' 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:27.736 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.736 11:42:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.736 [2024-11-15 11:42:08.042183] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:23:27.736 [2024-11-15 11:42:08.042268] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013118 ] 00:23:27.736 [2024-11-15 11:42:08.110512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.995 [2024-11-15 11:42:08.173261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.995 11:42:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.372 [2024-11-15 11:42:09.439498] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.372 [2024-11-15 11:42:09.439523] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.372 [2024-11-15 11:42:09.439551] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.372 [2024-11-15 11:42:09.525858] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:29.372 [2024-11-15 11:42:09.620723] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:29.372 [2024-11-15 11:42:09.621577] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xcdebe0:1 started. 00:23:29.372 [2024-11-15 11:42:09.623178] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:29.372 [2024-11-15 11:42:09.623232] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:29.372 [2024-11-15 11:42:09.623265] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:29.372 [2024-11-15 11:42:09.623299] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.372 [2024-11-15 11:42:09.623336] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.372 [2024-11-15 11:42:09.627999] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xcdebe0 was disconnected and freed. delete nvme_qpair. 
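The get_bdev_list/wait_for_bdev calls traced in this block poll the host app's RPC socket until the namespace discovered over 10.0.0.2:8009 shows up as a bdev. A rough sketch inferred from the xtrace (rpc_cmd is the harness wrapper around scripts/rpc.py; the verbatim helpers live in discovery_remove_ifc.sh):

get_bdev_list() {
    # list bdev names over the host-side RPC socket, normalized for comparison
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # retry once per second until the bdev list matches the expected value
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1 returns immediately here because discovery has already attached nvme0; the wait_for_bdev '' issued right after the interface is torn down is what drives the repeated bdev_get_bdevs / sleep 1 entries that follow.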
00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.372 11:42:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.749 11:42:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.685 11:42:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.685 11:42:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.621 11:42:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.558 11:42:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.932 11:42:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.932 [2024-11-15 11:42:15.064975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:34.932 [2024-11-15 11:42:15.065055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.932 [2024-11-15 11:42:15.065077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.933 [2024-11-15 11:42:15.065110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.933 [2024-11-15 11:42:15.065123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.933 [2024-11-15 11:42:15.065136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.933 [2024-11-15 11:42:15.065149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.933 [2024-11-15 11:42:15.065162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.933 [2024-11-15 11:42:15.065175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.933 [2024-11-15 11:42:15.065189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.933 [2024-11-15 11:42:15.065201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.933 [2024-11-15 11:42:15.065214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbb400 is same with the state(6) to be set 00:23:34.933 [2024-11-15 11:42:15.074996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbb400 (9): Bad file descriptor 00:23:34.933 [2024-11-15 11:42:15.085037] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:34.933 [2024-11-15 11:42:15.085059] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
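The errno 110 (ETIMEDOUT) and the aborted ASYNC EVENT REQUEST / KEEP ALIVE commands above are the expected fallout of deleting 10.0.0.2 and downing cvl_0_0 inside the namespace: the host's TCP qpair dies and bdev_nvme enters its disconnect/reconnect cycle. How long that cycle is allowed to run is bounded by the flags passed to bdev_nvme_start_discovery earlier in this log:

#   --reconnect-delay-sec 1        wait one second between reconnect attempts
#   --fast-io-fail-timeout-sec 1   fail queued I/O after one second without a connection
#   --ctrlr-loss-timeout-sec 2     give up and delete the controller (and its bdev) after two seconds
# A manual look at the retrying controller (not issued by the test itself) could be:
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers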
00:23:34.933 [2024-11-15 11:42:15.085069] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:34.933 [2024-11-15 11:42:15.085078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:34.933 [2024-11-15 11:42:15.085130] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.867 11:42:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.867 [2024-11-15 11:42:16.150345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:35.867 [2024-11-15 11:42:16.150424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcbb400 with addr=10.0.0.2, port=4420 00:23:35.867 [2024-11-15 11:42:16.150459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbb400 is same with the state(6) to be set 00:23:35.867 [2024-11-15 11:42:16.150509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbb400 (9): Bad file descriptor 00:23:35.867 [2024-11-15 11:42:16.150981] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:35.867 [2024-11-15 11:42:16.151033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:35.867 [2024-11-15 11:42:16.151050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:35.867 [2024-11-15 11:42:16.151064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:35.867 [2024-11-15 11:42:16.151077] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:35.867 [2024-11-15 11:42:16.151086] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:35.867 [2024-11-15 11:42:16.151094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:35.867 [2024-11-15 11:42:16.151113] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
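The posix_sock_create connect() failure with errno 110 is the reconnect attempt itself timing out: 10.0.0.2 is no longer reachable from the initiator side, so failover cannot proceed and the controller is left in a failed state. Once ctrlr-loss-timeout expires, the controller and nvme0n1 are deleted, which is what finally lets the wait_for_bdev '' loop exit. A manual probe at this point (not part of the script) would show the same thing:

ping -c 1 -W 1 10.0.0.2 || echo 'target unreachable, as the connect() errno 110 above implies'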
00:23:35.867 [2024-11-15 11:42:16.151122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:35.867 11:42:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.867 11:42:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:35.867 11:42:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.802 [2024-11-15 11:42:17.153613] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:36.802 [2024-11-15 11:42:17.153660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:36.802 [2024-11-15 11:42:17.153685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:36.802 [2024-11-15 11:42:17.153714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:36.802 [2024-11-15 11:42:17.153728] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:36.802 [2024-11-15 11:42:17.153742] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:36.802 [2024-11-15 11:42:17.153752] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:36.802 [2024-11-15 11:42:17.153759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:36.802 [2024-11-15 11:42:17.153800] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:36.802 [2024-11-15 11:42:17.153863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.802 [2024-11-15 11:42:17.153884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.802 [2024-11-15 11:42:17.153902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.802 [2024-11-15 11:42:17.153915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.802 [2024-11-15 11:42:17.153928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.802 [2024-11-15 11:42:17.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.802 [2024-11-15 11:42:17.153961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.802 [2024-11-15 11:42:17.153975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.802 [2024-11-15 11:42:17.153989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.802 [2024-11-15 11:42:17.154001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.802 [2024-11-15 11:42:17.154015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:36.802 [2024-11-15 11:42:17.154066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaab40 (9): Bad file descriptor 00:23:36.803 [2024-11-15 11:42:17.155052] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:36.803 [2024-11-15 11:42:17.155074] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.803 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:37.061 11:42:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:37.994 11:42:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.994 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.994 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.994 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.994 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.994 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.995 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.995 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.995 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:37.995 11:42:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.928 [2024-11-15 11:42:19.205911] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:38.928 [2024-11-15 11:42:19.205936] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:38.928 [2024-11-15 11:42:19.205957] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:38.928 [2024-11-15 11:42:19.333383] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:38.928 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.928 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.928 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.928 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.928 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.929 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.929 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.929 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.187 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:39.187 11:42:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.187 [2024-11-15 11:42:19.556618] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:39.187 [2024-11-15 11:42:19.557356] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xce9390:1 started. 
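Re-adding 10.0.0.2/24 and bringing cvl_0_0 back up is all the still-running discovery service needs: it reconnects to 10.0.0.2:8009 on its own, re-attaches the subsystem under a fresh controller, and the namespace reappears as nvme1n1 (not nvme0n1, because the old nvme0 controller was deleted rather than revived). Condensed, the recovery half of the test is just:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1     # same polling helper sketched earlier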
00:23:39.187 [2024-11-15 11:42:19.558675] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:39.187 [2024-11-15 11:42:19.558717] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:39.187 [2024-11-15 11:42:19.558750] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:39.187 [2024-11-15 11:42:19.558770] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:39.187 [2024-11-15 11:42:19.558782] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.187 [2024-11-15 11:42:19.564509] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xce9390 was disconnected and freed. delete nvme_qpair. 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3013118 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3013118 ']' 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3013118 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3013118 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3013118' 00:23:40.121 killing process with pid 3013118 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3013118 00:23:40.121 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3013118 00:23:40.378 11:42:20 
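killprocess, traced at the end of this block for the host app (pid 3013118) and again below for the target (pid 3012893), boils down to the pattern visible in the xtrace; a sketch, not the verbatim autotest_common.sh helper:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")         # detect a sudo wrapper before signalling
    echo "killing process with pid $pid"
    if [[ $name == sudo ]]; then
        sudo kill "$pid"
    else
        kill "$pid"
        wait "$pid" || true                         # reap it so sockets and hugepages are really released
    fi
}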
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.379 rmmod nvme_tcp 00:23:40.379 rmmod nvme_fabrics 00:23:40.379 rmmod nvme_keyring 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3012893 ']' 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3012893 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3012893 ']' 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3012893 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3012893 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3012893' 00:23:40.379 killing process with pid 3012893 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3012893 00:23:40.379 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3012893 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.637 11:42:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.588 11:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.588 00:23:42.588 real 0m17.885s 00:23:42.588 user 0m25.714s 00:23:42.588 sys 0m3.157s 00:23:42.867 11:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.867 11:42:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.867 ************************************ 00:23:42.867 END TEST nvmf_discovery_remove_ifc 00:23:42.867 ************************************ 00:23:42.867 11:42:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.868 ************************************ 00:23:42.868 START TEST nvmf_identify_kernel_target 00:23:42.868 ************************************ 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:42.868 * Looking for test storage... 
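For reference, the nvmftestfini teardown that closed the discovery test just above condenses to the following (names from this run; the ip netns delete line is an assumption about what _remove_spdk_ns does here, not a verbatim expansion):

modprobe -v -r nvme-tcp                                  # also drags out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
killprocess 3012893                                      # the target app started at the top of the test
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules the test inserted
ip netns delete cvl_0_0_ns_spdk                          # assumed effect of _remove_spdk_ns in this run
ip -4 addr flush cvl_0_1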
00:23:42.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.868 --rc genhtml_branch_coverage=1 00:23:42.868 --rc genhtml_function_coverage=1 00:23:42.868 --rc genhtml_legend=1 00:23:42.868 --rc geninfo_all_blocks=1 00:23:42.868 --rc geninfo_unexecuted_blocks=1 00:23:42.868 00:23:42.868 ' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.868 --rc genhtml_branch_coverage=1 00:23:42.868 --rc genhtml_function_coverage=1 00:23:42.868 --rc genhtml_legend=1 00:23:42.868 --rc geninfo_all_blocks=1 00:23:42.868 --rc geninfo_unexecuted_blocks=1 00:23:42.868 00:23:42.868 ' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.868 --rc genhtml_branch_coverage=1 00:23:42.868 --rc genhtml_function_coverage=1 00:23:42.868 --rc genhtml_legend=1 00:23:42.868 --rc geninfo_all_blocks=1 00:23:42.868 --rc geninfo_unexecuted_blocks=1 00:23:42.868 00:23:42.868 ' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.868 --rc genhtml_branch_coverage=1 00:23:42.868 --rc genhtml_function_coverage=1 00:23:42.868 --rc genhtml_legend=1 00:23:42.868 --rc geninfo_all_blocks=1 00:23:42.868 --rc geninfo_unexecuted_blocks=1 00:23:42.868 00:23:42.868 ' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.868 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:42.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.869 11:42:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.404 11:42:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:45.404 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:45.404 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.404 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:45.405 Found net devices under 0000:09:00.0: cvl_0_0 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:45.405 Found net devices under 0000:09:00.1: cvl_0_1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:23:45.405 00:23:45.405 --- 10.0.0.2 ping statistics --- 00:23:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.405 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:45.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:23:45.405 00:23:45.405 --- 10.0.0.1 ping statistics --- 00:23:45.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.405 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:45.405 11:42:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:45.405 11:42:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:46.340 Waiting for block devices as requested 00:23:46.340 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:46.340 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:46.600 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:46.600 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:46.600 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:46.858 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:46.858 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:46.858 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:46.858 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.118 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:47.118 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:47.119 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:47.378 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:47.378 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:47.378 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:47.636 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:47.636 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
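The portion of the trace that follows builds a kernel NVMe-oF target for the scanned NVMe namespace entirely through the nvmet configfs interface (mkdir the subsystem, namespace and port directories, echo the attributes, then symlink the subsystem under the port). Condensed into plain shell it looks roughly like the sketch below. The trace only shows the values being echoed, not the attribute files they land in, so the destination names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs names and are assumed here, as is loading nvmet_tcp explicitly; the NQN, block device and 10.0.0.1:4420 listener are the values used in this run.

# Minimal sketch of the configfs-based kernel NVMe-oF/TCP target setup (assumed
# attribute names; run as root on a kernel built with NVMe target support).
modprobe nvmet
modprobe nvmet_tcp                                    # TCP transport for the kernel target
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
echo 1            > $subsys/attr_allow_any_host       # accept any host NQN
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path  # back the namespace with the local NVMe disk
echo 1            > $subsys/namespaces/1/enable

echo 10.0.0.1 > $nvmet/ports/1/addr_traddr            # listen on the address facing the test netns
echo tcp      > $nvmet/ports/1/addr_trtype
echo 4420     > $nvmet/ports/1/addr_trsvcid
echo ipv4     > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/              # attach the subsystem to the port

Once the port is linked, the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` call later in the trace reports two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.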
00:23:47.636 11:42:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:47.636 No valid GPT data, bailing 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:47.636 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:47.895 00:23:47.895 Discovery Log Number of Records 2, Generation counter 2 00:23:47.895 =====Discovery Log Entry 0====== 00:23:47.895 trtype: tcp 00:23:47.895 adrfam: ipv4 00:23:47.895 subtype: current discovery subsystem 00:23:47.895 treq: not specified, sq flow control disable supported 00:23:47.895 portid: 1 00:23:47.895 trsvcid: 4420 00:23:47.895 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:47.895 traddr: 10.0.0.1 00:23:47.895 eflags: none 00:23:47.895 sectype: none 00:23:47.895 =====Discovery Log Entry 1====== 00:23:47.895 trtype: tcp 00:23:47.895 adrfam: ipv4 00:23:47.895 subtype: nvme subsystem 00:23:47.895 treq: not specified, sq flow control disable 
supported 00:23:47.895 portid: 1 00:23:47.895 trsvcid: 4420 00:23:47.895 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:47.895 traddr: 10.0.0.1 00:23:47.895 eflags: none 00:23:47.895 sectype: none 00:23:47.895 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:47.895 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:47.895 ===================================================== 00:23:47.895 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:47.895 ===================================================== 00:23:47.895 Controller Capabilities/Features 00:23:47.895 ================================ 00:23:47.895 Vendor ID: 0000 00:23:47.895 Subsystem Vendor ID: 0000 00:23:47.895 Serial Number: 7a572342968c744f4e1d 00:23:47.895 Model Number: Linux 00:23:47.895 Firmware Version: 6.8.9-20 00:23:47.895 Recommended Arb Burst: 0 00:23:47.895 IEEE OUI Identifier: 00 00 00 00:23:47.895 Multi-path I/O 00:23:47.895 May have multiple subsystem ports: No 00:23:47.895 May have multiple controllers: No 00:23:47.895 Associated with SR-IOV VF: No 00:23:47.895 Max Data Transfer Size: Unlimited 00:23:47.895 Max Number of Namespaces: 0 00:23:47.895 Max Number of I/O Queues: 1024 00:23:47.895 NVMe Specification Version (VS): 1.3 00:23:47.895 NVMe Specification Version (Identify): 1.3 00:23:47.895 Maximum Queue Entries: 1024 00:23:47.895 Contiguous Queues Required: No 00:23:47.895 Arbitration Mechanisms Supported 00:23:47.895 Weighted Round Robin: Not Supported 00:23:47.895 Vendor Specific: Not Supported 00:23:47.895 Reset Timeout: 7500 ms 00:23:47.895 Doorbell Stride: 4 bytes 00:23:47.895 NVM Subsystem Reset: Not Supported 00:23:47.895 Command Sets Supported 00:23:47.895 NVM Command Set: Supported 00:23:47.895 Boot Partition: Not Supported 00:23:47.895 Memory Page Size Minimum: 4096 bytes 00:23:47.895 Memory Page Size Maximum: 4096 bytes 00:23:47.895 Persistent Memory Region: Not Supported 00:23:47.895 Optional Asynchronous Events Supported 00:23:47.895 Namespace Attribute Notices: Not Supported 00:23:47.895 Firmware Activation Notices: Not Supported 00:23:47.895 ANA Change Notices: Not Supported 00:23:47.895 PLE Aggregate Log Change Notices: Not Supported 00:23:47.895 LBA Status Info Alert Notices: Not Supported 00:23:47.895 EGE Aggregate Log Change Notices: Not Supported 00:23:47.895 Normal NVM Subsystem Shutdown event: Not Supported 00:23:47.895 Zone Descriptor Change Notices: Not Supported 00:23:47.895 Discovery Log Change Notices: Supported 00:23:47.895 Controller Attributes 00:23:47.895 128-bit Host Identifier: Not Supported 00:23:47.895 Non-Operational Permissive Mode: Not Supported 00:23:47.895 NVM Sets: Not Supported 00:23:47.895 Read Recovery Levels: Not Supported 00:23:47.895 Endurance Groups: Not Supported 00:23:47.895 Predictable Latency Mode: Not Supported 00:23:47.895 Traffic Based Keep ALive: Not Supported 00:23:47.895 Namespace Granularity: Not Supported 00:23:47.895 SQ Associations: Not Supported 00:23:47.895 UUID List: Not Supported 00:23:47.895 Multi-Domain Subsystem: Not Supported 00:23:47.895 Fixed Capacity Management: Not Supported 00:23:47.895 Variable Capacity Management: Not Supported 00:23:47.895 Delete Endurance Group: Not Supported 00:23:47.895 Delete NVM Set: Not Supported 00:23:47.895 Extended LBA Formats Supported: Not Supported 00:23:47.895 Flexible Data Placement 
Supported: Not Supported 00:23:47.895 00:23:47.895 Controller Memory Buffer Support 00:23:47.895 ================================ 00:23:47.895 Supported: No 00:23:47.895 00:23:47.895 Persistent Memory Region Support 00:23:47.895 ================================ 00:23:47.895 Supported: No 00:23:47.895 00:23:47.895 Admin Command Set Attributes 00:23:47.895 ============================ 00:23:47.895 Security Send/Receive: Not Supported 00:23:47.895 Format NVM: Not Supported 00:23:47.895 Firmware Activate/Download: Not Supported 00:23:47.895 Namespace Management: Not Supported 00:23:47.895 Device Self-Test: Not Supported 00:23:47.895 Directives: Not Supported 00:23:47.895 NVMe-MI: Not Supported 00:23:47.895 Virtualization Management: Not Supported 00:23:47.895 Doorbell Buffer Config: Not Supported 00:23:47.895 Get LBA Status Capability: Not Supported 00:23:47.895 Command & Feature Lockdown Capability: Not Supported 00:23:47.895 Abort Command Limit: 1 00:23:47.895 Async Event Request Limit: 1 00:23:47.895 Number of Firmware Slots: N/A 00:23:47.895 Firmware Slot 1 Read-Only: N/A 00:23:47.895 Firmware Activation Without Reset: N/A 00:23:47.895 Multiple Update Detection Support: N/A 00:23:47.895 Firmware Update Granularity: No Information Provided 00:23:47.895 Per-Namespace SMART Log: No 00:23:47.895 Asymmetric Namespace Access Log Page: Not Supported 00:23:47.895 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:47.895 Command Effects Log Page: Not Supported 00:23:47.895 Get Log Page Extended Data: Supported 00:23:47.895 Telemetry Log Pages: Not Supported 00:23:47.895 Persistent Event Log Pages: Not Supported 00:23:47.895 Supported Log Pages Log Page: May Support 00:23:47.895 Commands Supported & Effects Log Page: Not Supported 00:23:47.895 Feature Identifiers & Effects Log Page:May Support 00:23:47.895 NVMe-MI Commands & Effects Log Page: May Support 00:23:47.895 Data Area 4 for Telemetry Log: Not Supported 00:23:47.895 Error Log Page Entries Supported: 1 00:23:47.895 Keep Alive: Not Supported 00:23:47.895 00:23:47.895 NVM Command Set Attributes 00:23:47.895 ========================== 00:23:47.895 Submission Queue Entry Size 00:23:47.895 Max: 1 00:23:47.895 Min: 1 00:23:47.895 Completion Queue Entry Size 00:23:47.895 Max: 1 00:23:47.895 Min: 1 00:23:47.895 Number of Namespaces: 0 00:23:47.895 Compare Command: Not Supported 00:23:47.895 Write Uncorrectable Command: Not Supported 00:23:47.895 Dataset Management Command: Not Supported 00:23:47.895 Write Zeroes Command: Not Supported 00:23:47.895 Set Features Save Field: Not Supported 00:23:47.895 Reservations: Not Supported 00:23:47.895 Timestamp: Not Supported 00:23:47.895 Copy: Not Supported 00:23:47.895 Volatile Write Cache: Not Present 00:23:47.895 Atomic Write Unit (Normal): 1 00:23:47.895 Atomic Write Unit (PFail): 1 00:23:47.895 Atomic Compare & Write Unit: 1 00:23:47.895 Fused Compare & Write: Not Supported 00:23:47.895 Scatter-Gather List 00:23:47.895 SGL Command Set: Supported 00:23:47.895 SGL Keyed: Not Supported 00:23:47.895 SGL Bit Bucket Descriptor: Not Supported 00:23:47.895 SGL Metadata Pointer: Not Supported 00:23:47.895 Oversized SGL: Not Supported 00:23:47.896 SGL Metadata Address: Not Supported 00:23:47.896 SGL Offset: Supported 00:23:47.896 Transport SGL Data Block: Not Supported 00:23:47.896 Replay Protected Memory Block: Not Supported 00:23:47.896 00:23:47.896 Firmware Slot Information 00:23:47.896 ========================= 00:23:47.896 Active slot: 0 00:23:47.896 00:23:47.896 00:23:47.896 Error Log 00:23:47.896 
========= 00:23:47.896 00:23:47.896 Active Namespaces 00:23:47.896 ================= 00:23:47.896 Discovery Log Page 00:23:47.896 ================== 00:23:47.896 Generation Counter: 2 00:23:47.896 Number of Records: 2 00:23:47.896 Record Format: 0 00:23:47.896 00:23:47.896 Discovery Log Entry 0 00:23:47.896 ---------------------- 00:23:47.896 Transport Type: 3 (TCP) 00:23:47.896 Address Family: 1 (IPv4) 00:23:47.896 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:47.896 Entry Flags: 00:23:47.896 Duplicate Returned Information: 0 00:23:47.896 Explicit Persistent Connection Support for Discovery: 0 00:23:47.896 Transport Requirements: 00:23:47.896 Secure Channel: Not Specified 00:23:47.896 Port ID: 1 (0x0001) 00:23:47.896 Controller ID: 65535 (0xffff) 00:23:47.896 Admin Max SQ Size: 32 00:23:47.896 Transport Service Identifier: 4420 00:23:47.896 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:47.896 Transport Address: 10.0.0.1 00:23:47.896 Discovery Log Entry 1 00:23:47.896 ---------------------- 00:23:47.896 Transport Type: 3 (TCP) 00:23:47.896 Address Family: 1 (IPv4) 00:23:47.896 Subsystem Type: 2 (NVM Subsystem) 00:23:47.896 Entry Flags: 00:23:47.896 Duplicate Returned Information: 0 00:23:47.896 Explicit Persistent Connection Support for Discovery: 0 00:23:47.896 Transport Requirements: 00:23:47.896 Secure Channel: Not Specified 00:23:47.896 Port ID: 1 (0x0001) 00:23:47.896 Controller ID: 65535 (0xffff) 00:23:47.896 Admin Max SQ Size: 32 00:23:47.896 Transport Service Identifier: 4420 00:23:47.896 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:47.896 Transport Address: 10.0.0.1 00:23:47.896 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:48.155 get_feature(0x01) failed 00:23:48.155 get_feature(0x02) failed 00:23:48.155 get_feature(0x04) failed 00:23:48.155 ===================================================== 00:23:48.155 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:48.155 ===================================================== 00:23:48.155 Controller Capabilities/Features 00:23:48.155 ================================ 00:23:48.155 Vendor ID: 0000 00:23:48.155 Subsystem Vendor ID: 0000 00:23:48.155 Serial Number: abe1dc94bba1ddf94e77 00:23:48.155 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:48.155 Firmware Version: 6.8.9-20 00:23:48.155 Recommended Arb Burst: 6 00:23:48.155 IEEE OUI Identifier: 00 00 00 00:23:48.155 Multi-path I/O 00:23:48.155 May have multiple subsystem ports: Yes 00:23:48.155 May have multiple controllers: Yes 00:23:48.155 Associated with SR-IOV VF: No 00:23:48.155 Max Data Transfer Size: Unlimited 00:23:48.155 Max Number of Namespaces: 1024 00:23:48.155 Max Number of I/O Queues: 128 00:23:48.155 NVMe Specification Version (VS): 1.3 00:23:48.155 NVMe Specification Version (Identify): 1.3 00:23:48.155 Maximum Queue Entries: 1024 00:23:48.155 Contiguous Queues Required: No 00:23:48.155 Arbitration Mechanisms Supported 00:23:48.155 Weighted Round Robin: Not Supported 00:23:48.155 Vendor Specific: Not Supported 00:23:48.155 Reset Timeout: 7500 ms 00:23:48.155 Doorbell Stride: 4 bytes 00:23:48.155 NVM Subsystem Reset: Not Supported 00:23:48.155 Command Sets Supported 00:23:48.155 NVM Command Set: Supported 00:23:48.155 Boot Partition: Not Supported 00:23:48.155 
Memory Page Size Minimum: 4096 bytes 00:23:48.155 Memory Page Size Maximum: 4096 bytes 00:23:48.155 Persistent Memory Region: Not Supported 00:23:48.155 Optional Asynchronous Events Supported 00:23:48.155 Namespace Attribute Notices: Supported 00:23:48.155 Firmware Activation Notices: Not Supported 00:23:48.155 ANA Change Notices: Supported 00:23:48.155 PLE Aggregate Log Change Notices: Not Supported 00:23:48.156 LBA Status Info Alert Notices: Not Supported 00:23:48.156 EGE Aggregate Log Change Notices: Not Supported 00:23:48.156 Normal NVM Subsystem Shutdown event: Not Supported 00:23:48.156 Zone Descriptor Change Notices: Not Supported 00:23:48.156 Discovery Log Change Notices: Not Supported 00:23:48.156 Controller Attributes 00:23:48.156 128-bit Host Identifier: Supported 00:23:48.156 Non-Operational Permissive Mode: Not Supported 00:23:48.156 NVM Sets: Not Supported 00:23:48.156 Read Recovery Levels: Not Supported 00:23:48.156 Endurance Groups: Not Supported 00:23:48.156 Predictable Latency Mode: Not Supported 00:23:48.156 Traffic Based Keep ALive: Supported 00:23:48.156 Namespace Granularity: Not Supported 00:23:48.156 SQ Associations: Not Supported 00:23:48.156 UUID List: Not Supported 00:23:48.156 Multi-Domain Subsystem: Not Supported 00:23:48.156 Fixed Capacity Management: Not Supported 00:23:48.156 Variable Capacity Management: Not Supported 00:23:48.156 Delete Endurance Group: Not Supported 00:23:48.156 Delete NVM Set: Not Supported 00:23:48.156 Extended LBA Formats Supported: Not Supported 00:23:48.156 Flexible Data Placement Supported: Not Supported 00:23:48.156 00:23:48.156 Controller Memory Buffer Support 00:23:48.156 ================================ 00:23:48.156 Supported: No 00:23:48.156 00:23:48.156 Persistent Memory Region Support 00:23:48.156 ================================ 00:23:48.156 Supported: No 00:23:48.156 00:23:48.156 Admin Command Set Attributes 00:23:48.156 ============================ 00:23:48.156 Security Send/Receive: Not Supported 00:23:48.156 Format NVM: Not Supported 00:23:48.156 Firmware Activate/Download: Not Supported 00:23:48.156 Namespace Management: Not Supported 00:23:48.156 Device Self-Test: Not Supported 00:23:48.156 Directives: Not Supported 00:23:48.156 NVMe-MI: Not Supported 00:23:48.156 Virtualization Management: Not Supported 00:23:48.156 Doorbell Buffer Config: Not Supported 00:23:48.156 Get LBA Status Capability: Not Supported 00:23:48.156 Command & Feature Lockdown Capability: Not Supported 00:23:48.156 Abort Command Limit: 4 00:23:48.156 Async Event Request Limit: 4 00:23:48.156 Number of Firmware Slots: N/A 00:23:48.156 Firmware Slot 1 Read-Only: N/A 00:23:48.156 Firmware Activation Without Reset: N/A 00:23:48.156 Multiple Update Detection Support: N/A 00:23:48.156 Firmware Update Granularity: No Information Provided 00:23:48.156 Per-Namespace SMART Log: Yes 00:23:48.156 Asymmetric Namespace Access Log Page: Supported 00:23:48.156 ANA Transition Time : 10 sec 00:23:48.156 00:23:48.156 Asymmetric Namespace Access Capabilities 00:23:48.156 ANA Optimized State : Supported 00:23:48.156 ANA Non-Optimized State : Supported 00:23:48.156 ANA Inaccessible State : Supported 00:23:48.156 ANA Persistent Loss State : Supported 00:23:48.156 ANA Change State : Supported 00:23:48.156 ANAGRPID is not changed : No 00:23:48.156 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:48.156 00:23:48.156 ANA Group Identifier Maximum : 128 00:23:48.156 Number of ANA Group Identifiers : 128 00:23:48.156 Max Number of Allowed Namespaces : 1024 00:23:48.156 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:48.156 Command Effects Log Page: Supported 00:23:48.156 Get Log Page Extended Data: Supported 00:23:48.156 Telemetry Log Pages: Not Supported 00:23:48.156 Persistent Event Log Pages: Not Supported 00:23:48.156 Supported Log Pages Log Page: May Support 00:23:48.156 Commands Supported & Effects Log Page: Not Supported 00:23:48.156 Feature Identifiers & Effects Log Page:May Support 00:23:48.156 NVMe-MI Commands & Effects Log Page: May Support 00:23:48.156 Data Area 4 for Telemetry Log: Not Supported 00:23:48.156 Error Log Page Entries Supported: 128 00:23:48.156 Keep Alive: Supported 00:23:48.156 Keep Alive Granularity: 1000 ms 00:23:48.156 00:23:48.156 NVM Command Set Attributes 00:23:48.156 ========================== 00:23:48.156 Submission Queue Entry Size 00:23:48.156 Max: 64 00:23:48.156 Min: 64 00:23:48.156 Completion Queue Entry Size 00:23:48.156 Max: 16 00:23:48.156 Min: 16 00:23:48.156 Number of Namespaces: 1024 00:23:48.156 Compare Command: Not Supported 00:23:48.156 Write Uncorrectable Command: Not Supported 00:23:48.156 Dataset Management Command: Supported 00:23:48.156 Write Zeroes Command: Supported 00:23:48.156 Set Features Save Field: Not Supported 00:23:48.156 Reservations: Not Supported 00:23:48.156 Timestamp: Not Supported 00:23:48.156 Copy: Not Supported 00:23:48.156 Volatile Write Cache: Present 00:23:48.156 Atomic Write Unit (Normal): 1 00:23:48.156 Atomic Write Unit (PFail): 1 00:23:48.156 Atomic Compare & Write Unit: 1 00:23:48.156 Fused Compare & Write: Not Supported 00:23:48.156 Scatter-Gather List 00:23:48.156 SGL Command Set: Supported 00:23:48.156 SGL Keyed: Not Supported 00:23:48.156 SGL Bit Bucket Descriptor: Not Supported 00:23:48.156 SGL Metadata Pointer: Not Supported 00:23:48.156 Oversized SGL: Not Supported 00:23:48.156 SGL Metadata Address: Not Supported 00:23:48.156 SGL Offset: Supported 00:23:48.156 Transport SGL Data Block: Not Supported 00:23:48.156 Replay Protected Memory Block: Not Supported 00:23:48.156 00:23:48.156 Firmware Slot Information 00:23:48.156 ========================= 00:23:48.156 Active slot: 0 00:23:48.156 00:23:48.156 Asymmetric Namespace Access 00:23:48.156 =========================== 00:23:48.156 Change Count : 0 00:23:48.156 Number of ANA Group Descriptors : 1 00:23:48.156 ANA Group Descriptor : 0 00:23:48.156 ANA Group ID : 1 00:23:48.156 Number of NSID Values : 1 00:23:48.156 Change Count : 0 00:23:48.156 ANA State : 1 00:23:48.156 Namespace Identifier : 1 00:23:48.156 00:23:48.156 Commands Supported and Effects 00:23:48.156 ============================== 00:23:48.156 Admin Commands 00:23:48.156 -------------- 00:23:48.156 Get Log Page (02h): Supported 00:23:48.156 Identify (06h): Supported 00:23:48.156 Abort (08h): Supported 00:23:48.156 Set Features (09h): Supported 00:23:48.156 Get Features (0Ah): Supported 00:23:48.156 Asynchronous Event Request (0Ch): Supported 00:23:48.156 Keep Alive (18h): Supported 00:23:48.156 I/O Commands 00:23:48.156 ------------ 00:23:48.156 Flush (00h): Supported 00:23:48.156 Write (01h): Supported LBA-Change 00:23:48.156 Read (02h): Supported 00:23:48.156 Write Zeroes (08h): Supported LBA-Change 00:23:48.156 Dataset Management (09h): Supported 00:23:48.156 00:23:48.156 Error Log 00:23:48.156 ========= 00:23:48.156 Entry: 0 00:23:48.156 Error Count: 0x3 00:23:48.156 Submission Queue Id: 0x0 00:23:48.156 Command Id: 0x5 00:23:48.156 Phase Bit: 0 00:23:48.156 Status Code: 0x2 00:23:48.156 Status Code Type: 0x0 00:23:48.156 Do Not Retry: 1 00:23:48.156 
Error Location: 0x28 00:23:48.156 LBA: 0x0 00:23:48.156 Namespace: 0x0 00:23:48.156 Vendor Log Page: 0x0 00:23:48.156 ----------- 00:23:48.156 Entry: 1 00:23:48.156 Error Count: 0x2 00:23:48.156 Submission Queue Id: 0x0 00:23:48.156 Command Id: 0x5 00:23:48.156 Phase Bit: 0 00:23:48.156 Status Code: 0x2 00:23:48.156 Status Code Type: 0x0 00:23:48.156 Do Not Retry: 1 00:23:48.156 Error Location: 0x28 00:23:48.156 LBA: 0x0 00:23:48.156 Namespace: 0x0 00:23:48.156 Vendor Log Page: 0x0 00:23:48.156 ----------- 00:23:48.156 Entry: 2 00:23:48.156 Error Count: 0x1 00:23:48.156 Submission Queue Id: 0x0 00:23:48.156 Command Id: 0x4 00:23:48.156 Phase Bit: 0 00:23:48.156 Status Code: 0x2 00:23:48.156 Status Code Type: 0x0 00:23:48.156 Do Not Retry: 1 00:23:48.156 Error Location: 0x28 00:23:48.156 LBA: 0x0 00:23:48.156 Namespace: 0x0 00:23:48.156 Vendor Log Page: 0x0 00:23:48.156 00:23:48.156 Number of Queues 00:23:48.156 ================ 00:23:48.156 Number of I/O Submission Queues: 128 00:23:48.156 Number of I/O Completion Queues: 128 00:23:48.156 00:23:48.156 ZNS Specific Controller Data 00:23:48.156 ============================ 00:23:48.156 Zone Append Size Limit: 0 00:23:48.156 00:23:48.156 00:23:48.156 Active Namespaces 00:23:48.156 ================= 00:23:48.156 get_feature(0x05) failed 00:23:48.156 Namespace ID:1 00:23:48.156 Command Set Identifier: NVM (00h) 00:23:48.156 Deallocate: Supported 00:23:48.156 Deallocated/Unwritten Error: Not Supported 00:23:48.156 Deallocated Read Value: Unknown 00:23:48.156 Deallocate in Write Zeroes: Not Supported 00:23:48.156 Deallocated Guard Field: 0xFFFF 00:23:48.156 Flush: Supported 00:23:48.156 Reservation: Not Supported 00:23:48.156 Namespace Sharing Capabilities: Multiple Controllers 00:23:48.156 Size (in LBAs): 1953525168 (931GiB) 00:23:48.157 Capacity (in LBAs): 1953525168 (931GiB) 00:23:48.157 Utilization (in LBAs): 1953525168 (931GiB) 00:23:48.157 UUID: c5b64a24-481d-46b6-9da3-8ad9df7c45df 00:23:48.157 Thin Provisioning: Not Supported 00:23:48.157 Per-NS Atomic Units: Yes 00:23:48.157 Atomic Boundary Size (Normal): 0 00:23:48.157 Atomic Boundary Size (PFail): 0 00:23:48.157 Atomic Boundary Offset: 0 00:23:48.157 NGUID/EUI64 Never Reused: No 00:23:48.157 ANA group ID: 1 00:23:48.157 Namespace Write Protected: No 00:23:48.157 Number of LBA Formats: 1 00:23:48.157 Current LBA Format: LBA Format #00 00:23:48.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:48.157 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.157 rmmod nvme_tcp 00:23:48.157 rmmod nvme_fabrics 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:48.157 11:42:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.157 11:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.062 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.062 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:50.062 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:50.062 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:50.320 11:42:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:51.695 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:51.695 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:51.695 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:52.632 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.632 00:23:52.632 real 0m9.966s 00:23:52.632 user 0m2.252s 00:23:52.632 sys 0m3.661s 00:23:52.632 11:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.632 11:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.632 ************************************ 00:23:52.632 END TEST nvmf_identify_kernel_target 00:23:52.632 ************************************ 00:23:52.632 11:42:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.632 11:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:52.632 11:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.632 11:42:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.632 ************************************ 00:23:52.632 START TEST nvmf_auth_host 00:23:52.632 ************************************ 00:23:52.632 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:52.890 * Looking for test storage... 
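Just before the auth test starts, nvmftestfini and clean_kernel_target in the trace above undo that setup: the SPDK_NVMF iptables rule is filtered back out via iptables-save | grep -v SPDK_NVMF | iptables-restore, the initiator address is flushed, and the configfs tree is dismantled in reverse order. Stripped of the xtrace prefixes, the teardown amounts to roughly the following sketch; the file receiving the `echo 0` is not visible in the trace, so the namespace enable attribute (and the other paths) are the standard nvmet configfs names, assumed here.

# Teardown sketch mirroring clean_kernel_target (same NQN as this run; assumed
# attribute paths). Order matters: disable and unlink before the rmdirs.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

echo 0 > $subsys/namespaces/1/enable                           # stop exposing the namespace
rm -f  $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn   # detach it from the TCP port
rmdir  $subsys/namespaces/1
rmdir  $nvmet/ports/1
rmdir  $subsys
modprobe -r nvmet_tcp nvmet                                    # unload the kernel target modules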
00:23:52.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.890 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:52.890 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:52.890 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:52.890 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:52.890 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.890 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.891 --rc genhtml_branch_coverage=1 00:23:52.891 --rc genhtml_function_coverage=1 00:23:52.891 --rc genhtml_legend=1 00:23:52.891 --rc geninfo_all_blocks=1 00:23:52.891 --rc geninfo_unexecuted_blocks=1 00:23:52.891 00:23:52.891 ' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.891 --rc genhtml_branch_coverage=1 00:23:52.891 --rc genhtml_function_coverage=1 00:23:52.891 --rc genhtml_legend=1 00:23:52.891 --rc geninfo_all_blocks=1 00:23:52.891 --rc geninfo_unexecuted_blocks=1 00:23:52.891 00:23:52.891 ' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.891 --rc genhtml_branch_coverage=1 00:23:52.891 --rc genhtml_function_coverage=1 00:23:52.891 --rc genhtml_legend=1 00:23:52.891 --rc geninfo_all_blocks=1 00:23:52.891 --rc geninfo_unexecuted_blocks=1 00:23:52.891 00:23:52.891 ' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:52.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.891 --rc genhtml_branch_coverage=1 00:23:52.891 --rc genhtml_function_coverage=1 00:23:52.891 --rc genhtml_legend=1 00:23:52.891 --rc geninfo_all_blocks=1 00:23:52.891 --rc geninfo_unexecuted_blocks=1 00:23:52.891 00:23:52.891 ' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.891 11:42:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.891 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.892 11:42:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.423 11:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:55.423 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:55.423 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.423 
11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:55.423 Found net devices under 0000:09:00.0: cvl_0_0 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:55.423 Found net devices under 0000:09:00.1: cvl_0_1 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.423 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.424 11:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:23:55.424 00:23:55.424 --- 10.0.0.2 ping statistics --- 00:23:55.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.424 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:23:55.424 00:23:55.424 --- 10.0.0.1 ping statistics --- 00:23:55.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.424 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3020410 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3020410 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3020410 ']' 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
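The traced ip commands above build the loopback topology used for the phy TCP runs: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in the firewall, both directions are verified with a single ping, and nvmf_tgt is then launched inside the namespace with -L nvme_auth so the DH-HMAC-CHAP exchanges are logged. Condensed from the commands in this run (binary path shortened, run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator address
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &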
00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c28ce0645ee697d25648be6aec0c22c 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.d4X 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c28ce0645ee697d25648be6aec0c22c 0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c28ce0645ee697d25648be6aec0c22c 0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c28ce0645ee697d25648be6aec0c22c 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.d4X 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.d4X 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.d4X 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.424 11:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:55.424 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5104cc793ef506a699af014030f4978341930331e959ccd4ecd9031d3fbc1efa 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ht3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5104cc793ef506a699af014030f4978341930331e959ccd4ecd9031d3fbc1efa 3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5104cc793ef506a699af014030f4978341930331e959ccd4ecd9031d3fbc1efa 3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5104cc793ef506a699af014030f4978341930331e959ccd4ecd9031d3fbc1efa 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ht3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ht3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ht3 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9052bb6e178e8101f27c5a9c5045a26a7ab89c440aa4205f 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vkV 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9052bb6e178e8101f27c5a9c5045a26a7ab89c440aa4205f 0 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9052bb6e178e8101f27c5a9c5045a26a7ab89c440aa4205f 0 
00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9052bb6e178e8101f27c5a9c5045a26a7ab89c440aa4205f 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vkV 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vkV 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vkV 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64fe70d79d4e59099603114a55fd6329805a2bb3deb28922 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k73 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64fe70d79d4e59099603114a55fd6329805a2bb3deb28922 2 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64fe70d79d4e59099603114a55fd6329805a2bb3deb28922 2 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=64fe70d79d4e59099603114a55fd6329805a2bb3deb28922 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k73 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k73 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.k73 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.683 11:42:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3cb270b89c6e0e12c9db9df85f76fcad 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gs0 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3cb270b89c6e0e12c9db9df85f76fcad 1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3cb270b89c6e0e12c9db9df85f76fcad 1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3cb270b89c6e0e12c9db9df85f76fcad 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:55.683 11:42:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gs0 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gs0 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.gs0 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ba7e44ee1d3db0026967290fcaf734b 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BZH 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ba7e44ee1d3db0026967290fcaf734b 1 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ba7e44ee1d3db0026967290fcaf734b 1 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=0ba7e44ee1d3db0026967290fcaf734b 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BZH 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BZH 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BZH 00:23:55.683 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c8ac014f5d23b028c26b17d70bcc3eaac0ed0a0a2a841caf 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oec 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c8ac014f5d23b028c26b17d70bcc3eaac0ed0a0a2a841caf 2 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c8ac014f5d23b028c26b17d70bcc3eaac0ed0a0a2a841caf 2 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c8ac014f5d23b028c26b17d70bcc3eaac0ed0a0a2a841caf 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:55.684 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oec 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oec 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oec 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:55.942 11:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1b8cffe819bd6f5cf871efb3df94600e 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Hxz 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1b8cffe819bd6f5cf871efb3df94600e 0 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1b8cffe819bd6f5cf871efb3df94600e 0 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1b8cffe819bd6f5cf871efb3df94600e 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Hxz 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Hxz 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Hxz 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a2b070c22626b785059a6a1c7fd152dcae2340f9b91b4a86e3a81ef8d5315e19 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SOS 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a2b070c22626b785059a6a1c7fd152dcae2340f9b91b4a86e3a81ef8d5315e19 3 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a2b070c22626b785059a6a1c7fd152dcae2340f9b91b4a86e3a81ef8d5315e19 3 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a2b070c22626b785059a6a1c7fd152dcae2340f9b91b4a86e3a81ef8d5315e19 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SOS 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SOS 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.SOS 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3020410 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3020410 ']' 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.942 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.d4X 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ht3 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ht3 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vkV 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.k73 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.k73 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.gs0 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BZH ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BZH 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oec 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Hxz ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Hxz 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.SOS 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.201 11:42:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:56.201 11:42:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:57.575 Waiting for block devices as requested 00:23:57.575 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.575 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:57.575 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:57.575 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:57.833 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:57.833 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:57.833 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:57.833 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:58.091 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:58.091 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:58.091 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:58.349 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:58.349 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:58.349 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:58.349 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:58.608 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:58.608 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:59.174 No valid GPT data, bailing 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:59.174 11:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:59.174 00:23:59.174 Discovery Log Number of Records 2, Generation counter 2 00:23:59.174 =====Discovery Log Entry 0====== 00:23:59.174 trtype: tcp 00:23:59.174 adrfam: ipv4 00:23:59.174 subtype: current discovery subsystem 00:23:59.174 treq: not specified, sq flow control disable supported 00:23:59.174 portid: 1 00:23:59.174 trsvcid: 4420 00:23:59.174 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:59.174 traddr: 10.0.0.1 00:23:59.174 eflags: none 00:23:59.174 sectype: none 00:23:59.174 =====Discovery Log Entry 1====== 00:23:59.174 trtype: tcp 00:23:59.174 adrfam: ipv4 00:23:59.174 subtype: nvme subsystem 00:23:59.174 treq: not specified, sq flow control disable supported 00:23:59.174 portid: 1 00:23:59.174 trsvcid: 4420 00:23:59.174 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:59.174 traddr: 10.0.0.1 00:23:59.174 eflags: none 00:23:59.174 sectype: none 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:59.174 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.175 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 nvme0n1 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
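Note: in the trace above, the host NQN is allow-listed on the kernel subsystem (mkdir under hosts/, echo 0 presumably into attr_allow_any_host, ln -s into allowed_hosts/), and nvmet_auth_set_key writes the hash name, DH group and DHHC-1 secrets for that host. Again xtrace hides the redirection targets; a sketch assuming the stock per-host nvmet attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), with the long DHHC-1 strings from the trace shortened to placeholders:

    # sketch only -- per-host DHCHAP attribute names assumed from the kernel nvmet configfs layout
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "$host" "$subsys/allowed_hosts/"
    echo 'hmac(sha256)'      > "$host/dhchap_hash"
    echo ffdhe2048           > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:<key1>'  > "$host/dhchap_key"       # host secret (key1 in the trace)
    echo 'DHHC-1:02:<ckey1>' > "$host/dhchap_ctrl_key"  # controller secret for bidirectional auth (ckey1)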
00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.433 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.434 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.692 nvme0n1 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.692 11:42:39 
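Note: on the SPDK side, connect_authenticate first restricts the allowed digests and DH groups with bdev_nvme_set_options and then attaches to the kernel target with DH-HMAC-CHAP enabled. rpc_cmd is the test suite's wrapper around the JSON-RPC client; issued directly, the per-iteration sequence seen above would look roughly like the sketch below (the scripts/rpc.py path is assumed, and key0/ckey0 are names of secrets loaded into the SPDK keyring earlier in the test, not shown in this excerpt):

    # sketch of one attach/verify/detach cycle from the trace
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers      # expect a controller named nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0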
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.692 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.693 11:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.951 nvme0n1 00:23:59.951 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.951 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.952 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 nvme0n1 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 nvme0n1 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.209 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 
00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.467 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.468 nvme0n1 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.468 11:42:40 
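Note: keyid 4 is the one entry without a controller secret. Its ckey is empty, so nvmet_auth_set_key skips the controller-key write and the attach above is issued with --dhchap-key key4 only, i.e. the host authenticates to the target but does not demand authentication in the reverse direction. A sketch of just that variation, under the same assumptions as the previous sketch:

    # unidirectional DH-HMAC-CHAP: no --dhchap-ctrlr-key for keyid 4
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4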
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.468 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:00.726 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.727 
11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.727 11:42:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 nvme0n1 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.727 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.985 11:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 nvme0n1 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.985 11:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.985 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.243 nvme0n1 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.243 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.244 11:42:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.244 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.502 nvme0n1 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
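Note: each block like the ones above is one iteration of the sweep driven by the loops at host/auth.sh@100-103: for every digest, DH group and key index, the test re-programs the kernel host entry and repeats the attach/verify/detach cycle. Paraphrased as a sketch, with the digest and dhgroup lists taken from the sha256,sha384,sha512 and ffdhe2048..ffdhe8192 values printed earlier in the trace:

    # shape of the sweep; nvmet_auth_set_key and connect_authenticate are the
    # test helpers whose expansions appear throughout this trace
    for digest in sha256 sha384 sha512; do
        for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
            for keyid in 0 1 2 3 4; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # kernel target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK host side
            done
        done
    done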
00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.502 11:42:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 nvme0n1 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.761 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.020 nvme0n1 00:24:02.020 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.020 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.020 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.020 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.020 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.020 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.278 11:42:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.278 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.537 nvme0n1 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
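The DHHC-1:... strings traced above are DH-HMAC-CHAP secret representations. On a best-effort reading of that format (not a normative statement), the two-digit field after DHHC-1 names the hash associated with the secret (00 meaning the secret is used as-is, 01/02/03 corresponding to SHA-256/384/512), and the last field is the base64 encoding of the secret with a CRC-32 appended. A quick way to sanity-check one of the keys from this log:

# key0 as it appears in the trace; strip the "DHHC-1:<hash>:" prefix and the trailing
# colon, then decode the base64 payload and count its bytes.
key='DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju:'
b64=${key#DHHC-1:*:}
b64=${b64%:}
echo -n "$b64" | base64 -d | wc -c   # prints 36, i.e. a 32-byte secret plus a 4-byte CRC-32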
00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.537 11:42:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.795 nvme0n1 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
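One detail worth calling out in the trace: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) lines rely on bash's ":+" expansion, which emits the flag pair only when ckeys[keyid] is set and non-empty. That is why key ids 0 through 3 attach with both --dhchap-key and --dhchap-ctrlr-key (bidirectional authentication), while key id 4, whose ckey is empty, attaches with --dhchap-key alone. A standalone illustration, with hypothetical array contents:

ckeys=([1]="ckey-value" [4]="")           # only presence vs. emptiness matters here
keyid=1; echo ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}   # prints: --dhchap-ctrlr-key ckey1
keyid=4; echo ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}   # prints nothing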
00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.795 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.796 11:42:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.796 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.054 nvme0n1 00:24:03.054 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.054 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.054 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.054 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.054 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.312 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.570 nvme0n1 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:03.570 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.571 11:42:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.138 nvme0n1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.138 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.705 nvme0n1 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.705 11:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.705 11:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.272 nvme0n1 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:05.272 
11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.272 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.838 nvme0n1 00:24:05.838 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.838 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.838 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.838 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.838 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.838 11:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.838 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.405 nvme0n1 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:06.405 11:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:06.405 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.406 11:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.339 nvme0n1 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.339 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.340 11:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.287 nvme0n1 00:24:08.287 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.288 11:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.288 11:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.222 nvme0n1 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.222 11:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.222 11:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.788 nvme0n1 00:24:09.788 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.788 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.788 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.788 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.788 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.788 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.046 11:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.046 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.047 11:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.980 nvme0n1 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:10.980 
11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:10.980 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.981 nvme0n1 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.981 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.239 nvme0n1 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.239 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.240 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.498 nvme0n1 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.498 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.499 11:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.499 11:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.757 nvme0n1 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.757 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.758 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:11.758 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:11.758 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.016 nvme0n1 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.016 11:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.016 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.017 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:12.017 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.017 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.275 nvme0n1 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.275 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.276 11:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.276 11:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.276 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.533 nvme0n1 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.534 11:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.802 nvme0n1 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.802 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.124 nvme0n1 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:13.124 
11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.124 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 nvme0n1 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.406 
11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.406 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.407 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.665 nvme0n1 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.665 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.666 11:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.666 11:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.924 nvme0n1 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.924 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.183 nvme0n1 00:24:14.183 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.183 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.183 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.183 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.183 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.183 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.454 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.455 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.725 nvme0n1 00:24:14.725 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.725 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.725 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.725 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.725 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.726 11:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.726 11:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.984 nvme0n1 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.984 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.985 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 nvme0n1 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.550 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.551 11:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.116 nvme0n1 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.116 11:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:16.116 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.117 11:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.117 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.684 nvme0n1 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.684 11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.684 
11:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.249 nvme0n1 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.249 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.250 11:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.815 nvme0n1 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.815 11:42:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.815 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.816 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.816 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.816 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.816 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.816 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.750 nvme0n1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.750 11:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.684 nvme0n1 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.684 
11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.684 11:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.618 nvme0n1 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.618 11:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.551 nvme0n1 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.551 11:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.551 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.552 11:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.552 11:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.485 nvme0n1 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.485 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.486 nvme0n1 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.486 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.745 11:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.745 nvme0n1 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:22.745 
11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.745 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 nvme0n1 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.004 
11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.262 nvme0n1 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.262 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.263 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.521 nvme0n1 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.521 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.522 11:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.780 nvme0n1 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.780 
11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.780 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.781 11:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.781 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 nvme0n1 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:24.039 11:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.039 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.298 nvme0n1 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.298 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.299 11:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.299 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.555 nvme0n1 00:24:24.555 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.555 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.555 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.555 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:24.556 
11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.556 11:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
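The trace above repeats one pattern per digest/dhgroup/key combination: program the key into the kernel nvmet target, restrict the SPDK initiator to that digest and DH group with bdev_nvme_set_options, attach with --dhchap-key/--dhchap-ctrlr-key, check that the controller shows up, then detach it. Below is a minimal host-side sketch of a single pass, issued with scripts/rpc.py rather than the suite's rpc_cmd wrapper; it assumes a target already listening on 10.0.0.1:4420 that exports nqn.2024-02.io.spdk:cnode0, assumes the DH-HMAC-CHAP secrets were registered earlier in the run under the keyring names key0/ckey0, and it leaves out the target-side nvmet_auth_set_key step.

#!/usr/bin/env bash
# Sketch of one pass of the auth loop traced in this log (host side only).
set -e
rpc=./scripts/rpc.py
digest=sha512
dhgroup=ffdhe4096
keyid=0

# Limit the initiator to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key; the controller key makes the authentication bidirectional.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The attach only completes if DH-HMAC-CHAP succeeded; confirm the name, then tear down.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0
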
00:24:24.814 nvme0n1 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.814 11:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.814 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.073 nvme0n1 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.073 11:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.073 11:43:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.073 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.074 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.074 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.640 nvme0n1 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:25.640 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.641 11:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.899 nvme0n1 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.899 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.157 nvme0n1 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.157 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.416 nvme0n1 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.416 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.674 11:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.674 11:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.932 nvme0n1 00:24:26.932 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.932 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.932 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.932 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.932 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.190 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.191 11:43:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.191 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.756 nvme0n1 00:24:27.756 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.756 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.756 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.756 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.757 11:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 nvme0n1 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.323 11:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.889 nvme0n1 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.889 11:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.889 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.147 nvme0n1 00:24:29.147 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.147 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.147 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.147 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.147 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.147 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWMyOGNlMDY0NWVlNjk3ZDI1NjQ4YmU2YWVjMGMyMmOOJoju: 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTEwNGNjNzkzZWY1MDZhNjk5YWYwMTQwMzBmNDk3ODM0MTkzMDMzMWU5NTljY2Q0ZWNkOTAzMWQzZmJjMWVmYaW/v4U=: 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.405 11:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.338 nvme0n1 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.338 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.339 11:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.273 nvme0n1 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.273 11:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.273 11:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.273 11:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.207 nvme0n1 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhhYzAxNGY1ZDIzYjAyOGMyNmIxN2Q3MGJjYzNlYWFjMGVkMGEwYTJhODQxY2FmygSEHg==: 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWI4Y2ZmZTgxOWJkNmY1Y2Y4NzFlZmIzZGY5NDYwMGWlvY8d: 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.207 11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.207 
11:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.141 nvme0n1 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJiMDcwYzIyNjI2Yjc4NTA1OWE2YTFjN2ZkMTUyZGNhZTIzNDBmOWI5MWI0YTg2ZTNhODFlZjhkNTMxNWUxOaH2Xnk=: 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.141 11:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.077 nvme0n1 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.077 request: 00:24:34.077 { 00:24:34.077 "name": "nvme0", 00:24:34.077 "trtype": "tcp", 00:24:34.077 "traddr": "10.0.0.1", 00:24:34.077 "adrfam": "ipv4", 00:24:34.077 "trsvcid": "4420", 00:24:34.077 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:34.077 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:34.077 "prchk_reftag": false, 00:24:34.077 "prchk_guard": false, 00:24:34.077 "hdgst": false, 00:24:34.077 "ddgst": false, 00:24:34.077 "allow_unrecognized_csi": false, 00:24:34.077 "method": "bdev_nvme_attach_controller", 00:24:34.077 "req_id": 1 00:24:34.077 } 00:24:34.077 Got JSON-RPC error response 00:24:34.077 response: 00:24:34.077 { 00:24:34.077 "code": -5, 00:24:34.077 "message": "Input/output error" 00:24:34.077 } 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.077 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.078 request: 00:24:34.078 { 00:24:34.078 "name": "nvme0", 00:24:34.078 "trtype": "tcp", 00:24:34.078 "traddr": "10.0.0.1", 00:24:34.078 "adrfam": "ipv4", 00:24:34.078 "trsvcid": "4420", 00:24:34.078 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:34.078 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:34.078 "prchk_reftag": false, 00:24:34.078 "prchk_guard": false, 00:24:34.078 "hdgst": false, 00:24:34.078 "ddgst": false, 00:24:34.078 "dhchap_key": "key2", 00:24:34.078 "allow_unrecognized_csi": false, 00:24:34.078 "method": "bdev_nvme_attach_controller", 00:24:34.078 "req_id": 1 00:24:34.078 } 00:24:34.078 Got JSON-RPC error response 00:24:34.078 response: 00:24:34.078 { 00:24:34.078 "code": -5, 00:24:34.078 "message": "Input/output error" 00:24:34.078 } 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
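The negative attach attempts traced above exercise the host-side DH-HMAC-CHAP failure path: bdev_nvme_attach_controller is invoked against 10.0.0.1:4420 first with no key and then with only --dhchap-key key2, and in both cases the JSON-RPC call is expected to be rejected with code -5 "Input/output error", because the kernel nvmet target only accepts the key it has configured for nqn.2024-02.io.spdk:host0. A minimal stand-alone sketch of the same call driven through rpc.py follows; the address, NQNs and key name are copied from the log, and it assumes a running SPDK application whose keyring already holds a key named key2 as in the test setup, so it is illustrative rather than a definitive reproduction.

  # Hedged reproduction of the negative test above; values are taken from the
  # log and are assumptions outside this CI environment.
  sudo ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2
  # Expected outcome, mirroring the "request:"/"response:" blocks in the log:
  # the RPC fails with {"code": -5, "message": "Input/output error"} because
  # key2 does not match the key the target requires for this host NQN.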
00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.078 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.337 request: 00:24:34.337 { 00:24:34.337 "name": "nvme0", 00:24:34.337 "trtype": "tcp", 00:24:34.337 "traddr": "10.0.0.1", 00:24:34.337 "adrfam": "ipv4", 00:24:34.337 "trsvcid": "4420", 00:24:34.337 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:34.337 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:34.337 "prchk_reftag": false, 00:24:34.337 "prchk_guard": false, 00:24:34.337 "hdgst": false, 00:24:34.337 "ddgst": false, 00:24:34.337 "dhchap_key": "key1", 00:24:34.337 "dhchap_ctrlr_key": "ckey2", 00:24:34.337 "allow_unrecognized_csi": false, 00:24:34.337 "method": "bdev_nvme_attach_controller", 00:24:34.337 "req_id": 1 00:24:34.337 } 00:24:34.337 Got JSON-RPC error response 00:24:34.337 response: 00:24:34.337 { 00:24:34.337 "code": -5, 00:24:34.337 "message": "Input/output 
error" 00:24:34.337 } 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.337 nvme0n1 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.337 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.595 request: 00:24:34.595 { 00:24:34.595 "name": "nvme0", 00:24:34.595 "dhchap_key": "key1", 00:24:34.595 "dhchap_ctrlr_key": "ckey2", 00:24:34.595 "method": "bdev_nvme_set_keys", 00:24:34.595 "req_id": 1 00:24:34.595 } 00:24:34.595 Got JSON-RPC error response 00:24:34.595 response: 00:24:34.595 { 00:24:34.595 "code": -13, 00:24:34.595 "message": "Permission denied" 00:24:34.595 } 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:34.595 11:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:35.528 11:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTA1MmJiNmUxNzhlODEwMWYyN2M1YTljNTA0NWEyNmE3YWI4OWM0NDBhYTQyMDVmUnOYpA==: 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: ]] 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NjRmZTcwZDc5ZDRlNTkwOTk2MDMxMTRhNTVmZDYzMjk4MDVhMmJiM2RlYjI4OTIyI+vj0g==: 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.902 11:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.902 nvme0n1 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:M2NiMjcwYjg5YzZlMGUxMmM5ZGI5ZGY4NWY3NmZjYWQo/Zhr: 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: ]] 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGJhN2U0NGVlMWQzZGIwMDI2OTY3MjkwZmNhZjczNGJAPkJW: 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.902 request: 00:24:36.902 { 00:24:36.902 "name": "nvme0", 00:24:36.902 "dhchap_key": "key2", 00:24:36.902 "dhchap_ctrlr_key": "ckey1", 00:24:36.902 "method": "bdev_nvme_set_keys", 00:24:36.902 "req_id": 1 00:24:36.902 } 00:24:36.902 Got JSON-RPC error response 00:24:36.902 response: 00:24:36.902 { 00:24:36.902 "code": -13, 00:24:36.902 "message": "Permission denied" 00:24:36.902 } 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:36.902 11:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:37.836 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.836 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:37.836 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.836 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.836 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:38.093 11:43:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.093 rmmod nvme_tcp 00:24:38.093 rmmod nvme_fabrics 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3020410 ']' 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3020410 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3020410 ']' 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3020410 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3020410 00:24:38.093 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.094 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.094 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3020410' 00:24:38.094 killing process with pid 3020410 00:24:38.094 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3020410 00:24:38.094 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3020410 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:24:38.351 11:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:40.255 11:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:41.634 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:41.634 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:41.634 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:42.570 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:42.828 11:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.d4X /tmp/spdk.key-null.vkV /tmp/spdk.key-sha256.gs0 /tmp/spdk.key-sha384.oec /tmp/spdk.key-sha512.SOS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:42.828 11:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:44.232 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:44.232 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:44.232 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 
00:24:44.232 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:44.232 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:44.232 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:44.232 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:44.232 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:44.232 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:44.232 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:44.232 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:44.232 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:44.232 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:44.232 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:44.232 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:44.232 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:44.232 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:44.232 00:24:44.232 real 0m51.449s 00:24:44.232 user 0m49.006s 00:24:44.232 sys 0m6.214s 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.232 ************************************ 00:24:44.232 END TEST nvmf_auth_host 00:24:44.232 ************************************ 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.232 ************************************ 00:24:44.232 START TEST nvmf_digest 00:24:44.232 ************************************ 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:44.232 * Looking for test storage... 
00:24:44.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.232 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.529 --rc genhtml_branch_coverage=1 00:24:44.529 --rc genhtml_function_coverage=1 00:24:44.529 --rc genhtml_legend=1 00:24:44.529 --rc geninfo_all_blocks=1 00:24:44.529 --rc geninfo_unexecuted_blocks=1 00:24:44.529 00:24:44.529 ' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.529 --rc genhtml_branch_coverage=1 00:24:44.529 --rc genhtml_function_coverage=1 00:24:44.529 --rc genhtml_legend=1 00:24:44.529 --rc geninfo_all_blocks=1 00:24:44.529 --rc geninfo_unexecuted_blocks=1 00:24:44.529 00:24:44.529 ' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.529 --rc genhtml_branch_coverage=1 00:24:44.529 --rc genhtml_function_coverage=1 00:24:44.529 --rc genhtml_legend=1 00:24:44.529 --rc geninfo_all_blocks=1 00:24:44.529 --rc geninfo_unexecuted_blocks=1 00:24:44.529 00:24:44.529 ' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.529 --rc genhtml_branch_coverage=1 00:24:44.529 --rc genhtml_function_coverage=1 00:24:44.529 --rc genhtml_legend=1 00:24:44.529 --rc geninfo_all_blocks=1 00:24:44.529 --rc geninfo_unexecuted_blocks=1 00:24:44.529 00:24:44.529 ' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.529 
11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.529 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.530 11:43:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.530 11:43:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.439 
11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:46.439 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:46.439 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:46.439 Found net devices under 0000:09:00.0: cvl_0_0 
00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:46.439 Found net devices under 0000:09:00.1: cvl_0_1 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.439 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:24:46.699 00:24:46.699 --- 10.0.0.2 ping statistics --- 00:24:46.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.699 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:24:46.699 00:24:46.699 --- 10.0.0.1 ping statistics --- 00:24:46.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.699 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 ************************************ 00:24:46.699 START TEST nvmf_digest_clean 00:24:46.699 ************************************ 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3030027 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3030027 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3030027 ']' 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.699 11:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 [2024-11-15 11:43:27.007085] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:24:46.699 [2024-11-15 11:43:27.007177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.699 [2024-11-15 11:43:27.076811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.958 [2024-11-15 11:43:27.134563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.958 [2024-11-15 11:43:27.134625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.958 [2024-11-15 11:43:27.134638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.958 [2024-11-15 11:43:27.134649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.958 [2024-11-15 11:43:27.134658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:46.958 [2024-11-15 11:43:27.135202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.958 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.958 null0 00:24:46.958 [2024-11-15 11:43:27.366982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.216 [2024-11-15 11:43:27.391206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3030051 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3030051 /var/tmp/bperf.sock 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3030051 ']' 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:47.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.216 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:47.216 [2024-11-15 11:43:27.439557] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:24:47.216 [2024-11-15 11:43:27.439631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030051 ] 00:24:47.216 [2024-11-15 11:43:27.505270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.216 [2024-11-15 11:43:27.564031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.474 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.474 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:47.475 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:47.475 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:47.475 11:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.731 11:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.731 11:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.297 nvme0n1 00:24:48.297 11:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:48.297 11:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.297 Running I/O for 2 seconds... 
00:24:50.605 18689.00 IOPS, 73.00 MiB/s [2024-11-15T10:43:31.032Z] 18804.00 IOPS, 73.45 MiB/s 00:24:50.605 Latency(us) 00:24:50.605 [2024-11-15T10:43:31.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.605 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:50.605 nvme0n1 : 2.01 18820.29 73.52 0.00 0.00 6794.70 3470.98 14660.65 00:24:50.605 [2024-11-15T10:43:31.032Z] =================================================================================================================== 00:24:50.605 [2024-11-15T10:43:31.032Z] Total : 18820.29 73.52 0.00 0.00 6794.70 3470.98 14660.65 00:24:50.605 { 00:24:50.605 "results": [ 00:24:50.605 { 00:24:50.605 "job": "nvme0n1", 00:24:50.605 "core_mask": "0x2", 00:24:50.605 "workload": "randread", 00:24:50.605 "status": "finished", 00:24:50.605 "queue_depth": 128, 00:24:50.605 "io_size": 4096, 00:24:50.605 "runtime": 2.00507, 00:24:50.605 "iops": 18820.290563421728, 00:24:50.605 "mibps": 73.51676001336612, 00:24:50.605 "io_failed": 0, 00:24:50.605 "io_timeout": 0, 00:24:50.605 "avg_latency_us": 6794.703400623434, 00:24:50.605 "min_latency_us": 3470.9807407407407, 00:24:50.605 "max_latency_us": 14660.645925925926 00:24:50.605 } 00:24:50.605 ], 00:24:50.605 "core_count": 1 00:24:50.605 } 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:50.605 | select(.opcode=="crc32c") 00:24:50.605 | "\(.module_name) \(.executed)"' 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3030051 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3030051 ']' 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3030051 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.605 11:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030051 00:24:50.605 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:50.605 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:24:50.605 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030051' 00:24:50.605 killing process with pid 3030051 00:24:50.605 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3030051 00:24:50.605 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.605 00:24:50.605 Latency(us) 00:24:50.605 [2024-11-15T10:43:31.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.605 [2024-11-15T10:43:31.032Z] =================================================================================================================== 00:24:50.605 [2024-11-15T10:43:31.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.605 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3030051 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3030580 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3030580 /var/tmp/bperf.sock 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3030580 ']' 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.863 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.864 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.864 [2024-11-15 11:43:31.279566] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:24:50.864 [2024-11-15 11:43:31.279659] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030580 ] 00:24:50.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:50.864 Zero copy mechanism will not be used. 00:24:51.122 [2024-11-15 11:43:31.346529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.122 [2024-11-15 11:43:31.405453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.122 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.122 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:51.122 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:51.122 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:51.122 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:51.690 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.690 11:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.948 nvme0n1 00:24:52.206 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:52.206 11:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:52.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:52.206 Zero copy mechanism will not be used. 00:24:52.206 Running I/O for 2 seconds... 
00:24:54.071 6183.00 IOPS, 772.88 MiB/s [2024-11-15T10:43:34.498Z] 6207.50 IOPS, 775.94 MiB/s 00:24:54.071 Latency(us) 00:24:54.071 [2024-11-15T10:43:34.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.071 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:54.071 nvme0n1 : 2.00 6209.41 776.18 0.00 0.00 2572.68 694.80 6650.69 00:24:54.071 [2024-11-15T10:43:34.498Z] =================================================================================================================== 00:24:54.071 [2024-11-15T10:43:34.498Z] Total : 6209.41 776.18 0.00 0.00 2572.68 694.80 6650.69 00:24:54.071 { 00:24:54.071 "results": [ 00:24:54.071 { 00:24:54.071 "job": "nvme0n1", 00:24:54.071 "core_mask": "0x2", 00:24:54.071 "workload": "randread", 00:24:54.071 "status": "finished", 00:24:54.071 "queue_depth": 16, 00:24:54.071 "io_size": 131072, 00:24:54.071 "runtime": 2.00196, 00:24:54.071 "iops": 6209.4147735219485, 00:24:54.071 "mibps": 776.1768466902436, 00:24:54.071 "io_failed": 0, 00:24:54.071 "io_timeout": 0, 00:24:54.071 "avg_latency_us": 2572.6792981703447, 00:24:54.071 "min_latency_us": 694.802962962963, 00:24:54.071 "max_latency_us": 6650.69037037037 00:24:54.071 } 00:24:54.071 ], 00:24:54.071 "core_count": 1 00:24:54.071 } 00:24:54.329 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:54.329 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:54.329 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:54.329 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:54.329 | select(.opcode=="crc32c") 00:24:54.329 | "\(.module_name) \(.executed)"' 00:24:54.329 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3030580 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3030580 ']' 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3030580 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030580 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030580' 00:24:54.587 killing process with pid 3030580 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3030580 00:24:54.587 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.587 00:24:54.587 Latency(us) 00:24:54.587 [2024-11-15T10:43:35.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.587 [2024-11-15T10:43:35.014Z] =================================================================================================================== 00:24:54.587 [2024-11-15T10:43:35.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.587 11:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3030580 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3030987 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3030987 /var/tmp/bperf.sock 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3030987 ']' 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.845 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.845 [2024-11-15 11:43:35.088947] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:24:54.845 [2024-11-15 11:43:35.089042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3030987 ] 00:24:54.845 [2024-11-15 11:43:35.163835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.845 [2024-11-15 11:43:35.226834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.103 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.103 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:55.103 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:55.103 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:55.103 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:55.361 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.361 11:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.927 nvme0n1 00:24:55.927 11:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:55.927 11:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.185 Running I/O for 2 seconds... 
00:24:58.053 20561.00 IOPS, 80.32 MiB/s [2024-11-15T10:43:38.480Z] 20048.50 IOPS, 78.31 MiB/s 00:24:58.053 Latency(us) 00:24:58.053 [2024-11-15T10:43:38.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.053 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:58.053 nvme0n1 : 2.01 20047.68 78.31 0.00 0.00 6370.70 2682.12 8980.86 00:24:58.053 [2024-11-15T10:43:38.480Z] =================================================================================================================== 00:24:58.053 [2024-11-15T10:43:38.480Z] Total : 20047.68 78.31 0.00 0.00 6370.70 2682.12 8980.86 00:24:58.053 { 00:24:58.053 "results": [ 00:24:58.053 { 00:24:58.053 "job": "nvme0n1", 00:24:58.053 "core_mask": "0x2", 00:24:58.053 "workload": "randwrite", 00:24:58.053 "status": "finished", 00:24:58.053 "queue_depth": 128, 00:24:58.053 "io_size": 4096, 00:24:58.053 "runtime": 2.008063, 00:24:58.053 "iops": 20047.677787001703, 00:24:58.053 "mibps": 78.3112413554754, 00:24:58.053 "io_failed": 0, 00:24:58.053 "io_timeout": 0, 00:24:58.053 "avg_latency_us": 6370.703750771662, 00:24:58.053 "min_latency_us": 2682.1214814814816, 00:24:58.053 "max_latency_us": 8980.85925925926 00:24:58.053 } 00:24:58.053 ], 00:24:58.053 "core_count": 1 00:24:58.053 } 00:24:58.053 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:58.053 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:58.053 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:58.053 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:58.053 | select(.opcode=="crc32c") 00:24:58.053 | "\(.module_name) \(.executed)"' 00:24:58.053 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3030987 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3030987 ']' 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3030987 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.311 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030987 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030987' 00:24:58.569 killing process with pid 3030987 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3030987 00:24:58.569 Received shutdown signal, test time was about 2.000000 seconds 00:24:58.569 00:24:58.569 Latency(us) 00:24:58.569 [2024-11-15T10:43:38.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.569 [2024-11-15T10:43:38.996Z] =================================================================================================================== 00:24:58.569 [2024-11-15T10:43:38.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3030987 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3031516 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3031516 /var/tmp/bperf.sock 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3031516 ']' 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.569 11:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.827 [2024-11-15 11:43:39.009807] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:24:58.827 [2024-11-15 11:43:39.009888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031516 ] 00:24:58.827 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.827 Zero copy mechanism will not be used. 00:24:58.827 [2024-11-15 11:43:39.075967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.827 [2024-11-15 11:43:39.134785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.827 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.828 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:58.828 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:58.828 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.828 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:59.394 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.394 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.653 nvme0n1 00:24:59.653 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:59.653 11:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:59.653 Zero copy mechanism will not be used. 00:24:59.653 Running I/O for 2 seconds... 
00:25:01.958 4748.00 IOPS, 593.50 MiB/s [2024-11-15T10:43:42.385Z] 4876.50 IOPS, 609.56 MiB/s 00:25:01.958 Latency(us) 00:25:01.958 [2024-11-15T10:43:42.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.958 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:01.958 nvme0n1 : 2.00 4874.54 609.32 0.00 0.00 3274.53 2645.71 11893.57 00:25:01.958 [2024-11-15T10:43:42.385Z] =================================================================================================================== 00:25:01.958 [2024-11-15T10:43:42.385Z] Total : 4874.54 609.32 0.00 0.00 3274.53 2645.71 11893.57 00:25:01.958 { 00:25:01.958 "results": [ 00:25:01.958 { 00:25:01.958 "job": "nvme0n1", 00:25:01.958 "core_mask": "0x2", 00:25:01.958 "workload": "randwrite", 00:25:01.958 "status": "finished", 00:25:01.958 "queue_depth": 16, 00:25:01.958 "io_size": 131072, 00:25:01.958 "runtime": 2.004906, 00:25:01.958 "iops": 4874.542746642486, 00:25:01.958 "mibps": 609.3178433303108, 00:25:01.958 "io_failed": 0, 00:25:01.958 "io_timeout": 0, 00:25:01.958 "avg_latency_us": 3274.528713196979, 00:25:01.958 "min_latency_us": 2645.7125925925925, 00:25:01.958 "max_latency_us": 11893.570370370371 00:25:01.958 } 00:25:01.958 ], 00:25:01.958 "core_count": 1 00:25:01.958 } 00:25:01.958 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:01.958 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:01.958 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:01.958 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:01.958 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:01.958 | select(.opcode=="crc32c") 00:25:01.958 | "\(.module_name) \(.executed)"' 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3031516 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3031516 ']' 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3031516 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031516 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031516' 00:25:02.216 killing process with pid 3031516 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3031516 00:25:02.216 Received shutdown signal, test time was about 2.000000 seconds 00:25:02.216 00:25:02.216 Latency(us) 00:25:02.216 [2024-11-15T10:43:42.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.216 [2024-11-15T10:43:42.643Z] =================================================================================================================== 00:25:02.216 [2024-11-15T10:43:42.643Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3031516 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3030027 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3030027 ']' 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3030027 00:25:02.216 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030027 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030027' 00:25:02.475 killing process with pid 3030027 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3030027 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3030027 00:25:02.475 00:25:02.475 real 0m15.927s 00:25:02.475 user 0m32.119s 00:25:02.475 sys 0m4.242s 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.475 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.475 ************************************ 00:25:02.475 END TEST nvmf_digest_clean 00:25:02.475 ************************************ 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:02.733 ************************************ 00:25:02.733 START TEST nvmf_digest_error 00:25:02.733 ************************************ 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3031951 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3031951 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3031951 ']' 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.733 11:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.733 [2024-11-15 11:43:42.983850] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:02.733 [2024-11-15 11:43:42.983935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.733 [2024-11-15 11:43:43.056528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.733 [2024-11-15 11:43:43.113877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.733 [2024-11-15 11:43:43.113944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.733 [2024-11-15 11:43:43.113958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.733 [2024-11-15 11:43:43.113982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.733 [2024-11-15 11:43:43.114004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
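For orientation, a minimal sketch of the target launch recorded by the xtrace above: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc so that the crc32c opcode can be re-routed before subsystem initialization, and the script blocks until the RPC socket answers. The command line and the /var/tmp/spdk.sock address are taken from this log; the polling loop is only an illustrative stand-in for the waitforlisten helper, not its actual implementation.

# start the target with startup gated on RPC (command line as logged above, paths shortened)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# illustrative readiness probe standing in for waitforlisten(): poll the default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
done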
00:25:02.733 [2024-11-15 11:43:43.114634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.991 [2024-11-15 11:43:43.251393] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.991 null0 00:25:02.991 [2024-11-15 11:43:43.365452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.991 [2024-11-15 11:43:43.389689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3032076 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3032076 /var/tmp/bperf.sock 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3032076 ']' 
00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.991 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.992 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.992 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.250 [2024-11-15 11:43:43.441011] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:03.250 [2024-11-15 11:43:43.441111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032076 ] 00:25:03.250 [2024-11-15 11:43:43.510681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.250 [2024-11-15 11:43:43.572229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.508 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.508 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:03.508 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.508 11:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:03.766 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:03.766 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.766 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.766 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.766 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.766 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.332 nvme0n1 00:25:04.332 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:04.332 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.332 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
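The records above and the READ completions that follow are the digest-error run itself: the host controller is attached with data digest enabled, the target is told to corrupt a batch of crc32c operations, and every read that bdevperf issues then fails digest verification on the initiator, which is what the repeated "data digest error" / TRANSIENT TRANSPORT ERROR lines below report. The commands in this sketch are the ones visible in the trace (script paths shortened); attributing them to the target socket or the bperf socket follows the rpc_cmd vs. bperf_rpc split shown above.

# host side (/var/tmp/bperf.sock): count NVMe errors and retry indefinitely
rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target side (/var/tmp/spdk.sock): make sure no injection is armed yet
rpc.py accel_error_inject_error -o crc32c -t disable
# host side: attach to cnode1 with data digest (--ddgst) enabled
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: corrupt the next 256 crc32c operations, then kick off the queued bdevperf job
rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests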
00:25:04.332 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.332 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:04.332 11:43:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.332 Running I/O for 2 seconds... 00:25:04.332 [2024-11-15 11:43:44.656567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.656625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.656644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.668948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.668977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.669007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.680103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.680133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.680164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.694787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.694818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.694836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.709221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.709253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.709271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.722114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.722144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.722161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.733938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.733967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.734006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.332 [2024-11-15 11:43:44.747886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.332 [2024-11-15 11:43:44.747914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.332 [2024-11-15 11:43:44.747930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.590 [2024-11-15 11:43:44.763432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.590 [2024-11-15 11:43:44.763461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.590 [2024-11-15 11:43:44.763493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.590 [2024-11-15 11:43:44.777944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.590 [2024-11-15 11:43:44.777987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.590 [2024-11-15 11:43:44.778002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.590 [2024-11-15 11:43:44.793111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.793139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.793155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.804068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.804095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.804125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.818925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.818952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.818981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.832285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.832324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.832343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.843277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.843329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.843345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.856901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.856935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.856968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.872687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.872714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.872745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.888455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.888485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.888515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.902835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.902866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.902884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.914231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.914258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.914288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.929403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.929432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.929463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.943964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.943993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.944009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.959551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.959582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.959599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.969902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.969930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.969961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.983941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.983968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.983998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.591 [2024-11-15 11:43:44.999294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.591 [2024-11-15 11:43:44.999356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.591 [2024-11-15 11:43:44.999372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.015778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.849 [2024-11-15 11:43:45.015806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.849 
[2024-11-15 11:43:45.015822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.031438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.849 [2024-11-15 11:43:45.031468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.849 [2024-11-15 11:43:45.031500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.045577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.849 [2024-11-15 11:43:45.045608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.849 [2024-11-15 11:43:45.045626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.056844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.849 [2024-11-15 11:43:45.056871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.849 [2024-11-15 11:43:45.056901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.071707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.849 [2024-11-15 11:43:45.071735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.849 [2024-11-15 11:43:45.071751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.085572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.849 [2024-11-15 11:43:45.085615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.849 [2024-11-15 11:43:45.085631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.849 [2024-11-15 11:43:45.100046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.100076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.100114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.116391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.116421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15264 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.116438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.128053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.128083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.128100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.143450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.143492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.143507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.155933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.155959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.155988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.170554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.170585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.170602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.186209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.186241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.186258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.198524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.198554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.198586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.210533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.210563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:73 nsid:1 lba:15951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.210594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.223986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.224014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.224044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.240202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.240229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.240261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.254952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.254981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.255011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:04.850 [2024-11-15 11:43:45.271072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:04.850 [2024-11-15 11:43:45.271103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.850 [2024-11-15 11:43:45.271120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.107 [2024-11-15 11:43:45.282992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.107 [2024-11-15 11:43:45.283018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.283049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.295365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.295394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.295423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.308238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.308265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.308296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.323436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.323464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.323494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.334680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.334707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.334742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.349826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.349854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.349885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.365724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.365753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.365769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.375887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.375915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.375945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.391273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.391324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.391341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.407581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 
[2024-11-15 11:43:45.407613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.407630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.422220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.422250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.422283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.433363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.433391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.433423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.449689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.449719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.449750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.463268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.463311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.463331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.477916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.477947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.477964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.489582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.489613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.489630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.505408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.505440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.505457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.108 [2024-11-15 11:43:45.519138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.108 [2024-11-15 11:43:45.519166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.108 [2024-11-15 11:43:45.519196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.533736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.533767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.533798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.548591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.548637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.548653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.559844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.559872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.559902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.574940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.574970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.575003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.586235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.586262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.586291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.599217] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.599248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.599265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.612843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.612889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.612905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.627953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.627999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.628016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 18240.00 IOPS, 71.25 MiB/s [2024-11-15T10:43:45.792Z] [2024-11-15 11:43:45.640865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.640893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.640924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.655041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.655069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.655086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.669020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.669047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.669078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.679711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.679737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.679768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.694124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.694151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.694186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.709349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.709376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.709407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.721617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.721647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.365 [2024-11-15 11:43:45.721664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.365 [2024-11-15 11:43:45.735721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.365 [2024-11-15 11:43:45.735749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.366 [2024-11-15 11:43:45.735780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.366 [2024-11-15 11:43:45.748258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.366 [2024-11-15 11:43:45.748309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.366 [2024-11-15 11:43:45.748329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.366 [2024-11-15 11:43:45.764327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.366 [2024-11-15 11:43:45.764358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.366 [2024-11-15 11:43:45.764375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.366 [2024-11-15 11:43:45.776654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.366 [2024-11-15 11:43:45.776685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.366 [2024-11-15 11:43:45.776702] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.366 [2024-11-15 11:43:45.788004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.366 [2024-11-15 11:43:45.788032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.366 [2024-11-15 11:43:45.788063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.623 [2024-11-15 11:43:45.803745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.623 [2024-11-15 11:43:45.803772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.623 [2024-11-15 11:43:45.803801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.623 [2024-11-15 11:43:45.819774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.623 [2024-11-15 11:43:45.819806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.623 [2024-11-15 11:43:45.819836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.623 [2024-11-15 11:43:45.835201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.835232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.835248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.846130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.846157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.846188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.861967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.861994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.862024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.875755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.875784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.875815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.890137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.890181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.890196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.902132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.902176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.902192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.916720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.916747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.916778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.933087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.933115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.933131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.945899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.945929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.945946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.961737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.961765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.961797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.975623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.975652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:05.624 [2024-11-15 11:43:45.975669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:45.987230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:45.987257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:45.987286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:46.002228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:46.002257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:46.002288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:46.015174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:46.015204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:46.015220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:46.031324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:46.031362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:46.031379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.624 [2024-11-15 11:43:46.042584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.624 [2024-11-15 11:43:46.042629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.624 [2024-11-15 11:43:46.042646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.882 [2024-11-15 11:43:46.055008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.055037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.055061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.069403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.069433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:402 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.069449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.080540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.080569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.080599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.095978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.096008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.096024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.109968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.109996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.110028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.123270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.123300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.123326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.134821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.134850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.134882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.151082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.151111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.151143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.165664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.165693] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.165724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.181999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.182030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.182047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.195119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.195149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.195166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.206701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.206729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.206760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.221957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.221986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.222017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.238159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.238188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.238218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.253427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.253456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.253487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.270186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.270217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.270234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.281647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.281675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.281692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.883 [2024-11-15 11:43:46.296651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:05.883 [2024-11-15 11:43:46.296694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.883 [2024-11-15 11:43:46.296717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.312061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.312092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.312109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.326877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.326908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.326924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.343648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.343679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.343696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.354198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.354243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.354260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.367688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 
00:25:06.142 [2024-11-15 11:43:46.367716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.367746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.383093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.383121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.383153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.399848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.399876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.399907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.414189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.414219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.414237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.430399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.430442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.430461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.445967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.445998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.446015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.458062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.458092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.458109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.472943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.472972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.473003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.487077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.487106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.487123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.498090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.498118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.498148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.513242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.513270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.513300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.528084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.528115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.528148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.540239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.540267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.540298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.142 [2024-11-15 11:43:46.553725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.142 [2024-11-15 11:43:46.553756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.142 [2024-11-15 11:43:46.553773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 [2024-11-15 11:43:46.566710] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.401 [2024-11-15 11:43:46.566741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.401 [2024-11-15 11:43:46.566757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 [2024-11-15 11:43:46.579524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.401 [2024-11-15 11:43:46.579554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.401 [2024-11-15 11:43:46.579587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 [2024-11-15 11:43:46.591423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.401 [2024-11-15 11:43:46.591453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.401 [2024-11-15 11:43:46.591470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 [2024-11-15 11:43:46.605064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.401 [2024-11-15 11:43:46.605095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.401 [2024-11-15 11:43:46.605112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 [2024-11-15 11:43:46.621547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.401 [2024-11-15 11:43:46.621578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.401 [2024-11-15 11:43:46.621596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 [2024-11-15 11:43:46.633198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd92bf0) 00:25:06.401 [2024-11-15 11:43:46.633229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.401 [2024-11-15 11:43:46.633246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.401 18249.50 IOPS, 71.29 MiB/s 00:25:06.401 Latency(us) 00:25:06.401 [2024-11-15T10:43:46.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.401 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:06.401 nvme0n1 : 2.00 18270.11 71.37 0.00 0.00 6999.14 3543.80 23107.51 00:25:06.401 [2024-11-15T10:43:46.828Z] =================================================================================================================== 00:25:06.401 
[2024-11-15T10:43:46.828Z] Total : 18270.11 71.37 0.00 0.00 6999.14 3543.80 23107.51 00:25:06.401 { 00:25:06.401 "results": [ 00:25:06.401 { 00:25:06.401 "job": "nvme0n1", 00:25:06.401 "core_mask": "0x2", 00:25:06.401 "workload": "randread", 00:25:06.401 "status": "finished", 00:25:06.401 "queue_depth": 128, 00:25:06.401 "io_size": 4096, 00:25:06.401 "runtime": 2.00475, 00:25:06.401 "iops": 18270.108492330713, 00:25:06.401 "mibps": 71.36761129816685, 00:25:06.401 "io_failed": 0, 00:25:06.401 "io_timeout": 0, 00:25:06.401 "avg_latency_us": 6999.139616777342, 00:25:06.401 "min_latency_us": 3543.7985185185184, 00:25:06.401 "max_latency_us": 23107.508148148147 00:25:06.401 } 00:25:06.401 ], 00:25:06.401 "core_count": 1 00:25:06.401 } 00:25:06.401 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:06.401 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:06.401 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:06.401 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:06.401 | .driver_specific 00:25:06.401 | .nvme_error 00:25:06.401 | .status_code 00:25:06.401 | .command_transient_transport_error' 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3032076 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3032076 ']' 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3032076 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032076 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032076' 00:25:06.657 killing process with pid 3032076 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3032076 00:25:06.657 Received shutdown signal, test time was about 2.000000 seconds 00:25:06.657 00:25:06.657 Latency(us) 00:25:06.657 [2024-11-15T10:43:47.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.657 [2024-11-15T10:43:47.084Z] =================================================================================================================== 00:25:06.657 [2024-11-15T10:43:47.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.657 11:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3032076 00:25:06.913 11:43:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3032504 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3032504 /var/tmp/bperf.sock 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3032504 ']' 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.913 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.913 [2024-11-15 11:43:47.255761] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:06.913 [2024-11-15 11:43:47.255843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032504 ] 00:25:06.913 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:06.913 Zero copy mechanism will not be used. 
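For reference, the get_transient_errcount check traced above (host/digest.sh@71 / @27 / @28) reduces to a single RPC-plus-jq query against bdevperf: it fetches per-bdev I/O statistics over /var/tmp/bperf.sock and extracts the COMMAND TRANSIENT TRANSPORT ERROR counter that the data digest errors above produced (143 in this run). A minimal sketch of that query, using only the rpc.py path, socket, bdev name, and jq filter visible in the trace — variable names are illustrative, not taken from host/digest.sh:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# ask bdevperf for iostat of nvme0n1 (error counters are present because the run
# was configured with bdev_nvme_set_options --nvme-error-stat)
errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# the test passes only if at least one transient transport error was observed
(( errcount > 0 ))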
00:25:06.913 [2024-11-15 11:43:47.320640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.171 [2024-11-15 11:43:47.377334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.171 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.171 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:07.171 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:07.171 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:07.429 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:07.429 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.429 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.429 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.429 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.429 11:43:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.995 nvme0n1 00:25:07.995 11:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:07.995 11:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.995 11:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:07.995 11:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.995 11:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:07.995 11:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.995 Zero copy mechanism will not be used. 00:25:07.995 Running I/O for 2 seconds... 
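The second error run is configured entirely over JSON-RPC before perform_tests is issued: bdevperf (listening on /var/tmp/bperf.sock) is told to keep NVMe error statistics and to retry indefinitely, the controller is attached with TCP data digest enabled, and crc32c corruption is injected so those digests fail and show up as the transient transport errors logged below. A minimal sketch of that sequence, assembled only from the commands and arguments in the trace above; the RPC variable is illustrative, and the target-side injection call is shown without a -s socket (as in the rpc_cmd trace), which is assumed to reach that application's default RPC socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# bdevperf side: keep per-bdev NVMe error counters; bdev retry count -1 keeps retrying failed I/O
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the subsystem with data digest (--ddgst) so every TCP data PDU carries a CRC32C
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# target side: corrupt crc32c accel operations (interval 32, per the trace), so the host sees
# "data digest error" and completes the reads with COMMAND TRANSIENT TRANSPORT ERROR
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32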
00:25:07.995 [2024-11-15 11:43:48.267901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.995 [2024-11-15 11:43:48.267948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.995 [2024-11-15 11:43:48.267980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.995 [2024-11-15 11:43:48.272890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.272925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.272943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.277606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.277638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.277655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.282245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.282276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.282293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.286599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.286630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.286647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.289592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.289621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.289638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.294111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.294139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.294155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.299170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.299200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.299218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.303770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.303800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.303817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.308218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.308247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.308263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.313481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.313512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.313530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.318633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.318663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.318695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.323387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.323417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.323434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.329178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.329224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.329241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.333760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.333792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.333809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.337281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.337320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.337339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.341157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.341188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.341206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.346073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.346105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.346128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.350502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.350533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.350551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.354168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.354198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.354214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.359219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.359251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.996 [2024-11-15 11:43:48.359269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.996 [2024-11-15 11:43:48.363899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.996 [2024-11-15 11:43:48.363929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.363960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.369287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.369326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.369344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.374405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.374435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.374467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.380563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.380610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.380627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.385908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.385955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.385971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.391356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.391391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.391409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.396726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.396771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 
[2024-11-15 11:43:48.396788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.401762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.401806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.401823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.407102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.407133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.407150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.412770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.412801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.412819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.997 [2024-11-15 11:43:48.417846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:07.997 [2024-11-15 11:43:48.417877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.997 [2024-11-15 11:43:48.417895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.422936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.422967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.422985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.428770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.428816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.428833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.434448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.434479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.434496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.440968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.441000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.441018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.446804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.446849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.446867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.452153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.452183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.452216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.458245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.458292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.458317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.464464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.464495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.464512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.470519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.470550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.470568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.475620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.475651] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.475669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.481199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.481230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.481247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.256 [2024-11-15 11:43:48.486825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.256 [2024-11-15 11:43:48.486856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.256 [2024-11-15 11:43:48.486880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.492365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.492411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.492428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.497857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.497887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.497905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.503148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.503180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.503197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.508981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.509026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.509043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.514952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.514983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.515001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.521094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.521125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.521143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.527243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.527275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.527293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.532807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.532840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.532857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.539024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.539061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.539080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.544565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.544595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.544613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.549424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.549454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.549472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.552133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 
00:25:08.257 [2024-11-15 11:43:48.552162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.552179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.557318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.557363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.557380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.563331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.563362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.563378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.570651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.570698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.570715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.578737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.578768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.578800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.586570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.586600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.586617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.592445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.592475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.592492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.597143] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.597172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.597203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.602724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.602756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.602773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.608581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.608610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.608626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.614141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.614170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.614187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.619311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.619341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.619359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.624372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.624403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.624419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.628743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.628772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.628788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:08.257 [2024-11-15 11:43:48.633212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.633247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.633265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.257 [2024-11-15 11:43:48.637581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.257 [2024-11-15 11:43:48.637624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.257 [2024-11-15 11:43:48.637640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.642150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.642194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.642209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.647783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.647813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.647830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.654758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.654788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.654804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.661555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.661585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.661602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.667011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.667056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.667074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.672565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.672595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.672613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.258 [2024-11-15 11:43:48.677025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.258 [2024-11-15 11:43:48.677052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.258 [2024-11-15 11:43:48.677084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.681528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.681557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.681574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.686597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.686627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.686658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.691390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.691420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.691437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.696197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.696227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.696260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.701658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.701689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.701706] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.707378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.707406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.707422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.712287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.712326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.712345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.717499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.717544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.717561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.722207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.722236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.722272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.726632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.726662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.726679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.731240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.731271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.731288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.735817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.735862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.735880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.740300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.740336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.740353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.744983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.745012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.745044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.749725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.749755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.749772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.754438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.754467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.754484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.759001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.517 [2024-11-15 11:43:48.759031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.517 [2024-11-15 11:43:48.759048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.517 [2024-11-15 11:43:48.763483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.763531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.763548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.768044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.768073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:08.518 [2024-11-15 11:43:48.768089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.772521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.772550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.772566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.777014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.777044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.777060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.781597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.781626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.781644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.787095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.787126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.787144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.792842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.792874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.792892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.799294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.799349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.799367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.804768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.804813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.804830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.810022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.810053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.810070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.815207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.815238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.815255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.819821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.819864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.819882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.824429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.824460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.824477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.828971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.829000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.829017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.833532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.833562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.833578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.838055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.838085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.838102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.842620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.842649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.842666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.847110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.847139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.847161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.851795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.851838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.851854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.856403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.856432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.856448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.861112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.861141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.861158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.865820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.865849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.865866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.870772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 
00:25:08.518 [2024-11-15 11:43:48.870802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.518 [2024-11-15 11:43:48.870820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.518 [2024-11-15 11:43:48.876246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.518 [2024-11-15 11:43:48.876290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.876328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.880897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.880942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.880959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.885615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.885646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.885663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.890685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.890715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.890732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.897163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.897193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.897226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.904569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.904600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.904641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.911431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.911463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.911481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.919768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.919799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.919831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.927444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.927476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.927494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.519 [2024-11-15 11:43:48.935240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.519 [2024-11-15 11:43:48.935274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.519 [2024-11-15 11:43:48.935292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.776 [2024-11-15 11:43:48.941635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.776 [2024-11-15 11:43:48.941682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.776 [2024-11-15 11:43:48.941700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.776 [2024-11-15 11:43:48.948606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.776 [2024-11-15 11:43:48.948638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.776 [2024-11-15 11:43:48.948662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.776 [2024-11-15 11:43:48.955562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.776 [2024-11-15 11:43:48.955594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.776 [2024-11-15 11:43:48.955612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.776 [2024-11-15 11:43:48.961764] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.776 [2024-11-15 11:43:48.961810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.776 [2024-11-15 11:43:48.961827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.776 [2024-11-15 11:43:48.967908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.776 [2024-11-15 11:43:48.967939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.776 [2024-11-15 11:43:48.967957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.776 [2024-11-15 11:43:48.973973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.776 [2024-11-15 11:43:48.974004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:48.974022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:48.980995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:48.981026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:48.981044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:48.987499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:48.987530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:48.987562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:48.993479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:48.993511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:48.993529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:48.999334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:48.999390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:48.999407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:25:08.777 [2024-11-15 11:43:49.005414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.005466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.005484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.011331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.011362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.011380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.018092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.018124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.018142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.025848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.025880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.034189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.034221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.034239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.041371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.041403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.041422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.045824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.045855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.045886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.053339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.053369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.053401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.061011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.061043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.061075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.069159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.069189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.069206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.076763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.076795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.076827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.084290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.084329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.084362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.092090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.092120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.092136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.099795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.099825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.099843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.107420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.107449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.107465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.115035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.115079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.115096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.122633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.122678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.122695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.130496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.130542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.130565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.138074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.138104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.138122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.146097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.146141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.154017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.154046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 
[2024-11-15 11:43:49.154077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.777 [2024-11-15 11:43:49.161674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.777 [2024-11-15 11:43:49.161705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.777 [2024-11-15 11:43:49.161736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.167396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.167428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.167446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.171563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.171593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.171610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.174595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.174625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.174642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.178130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.178159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.178176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.182132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.182181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.182198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.186620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.186665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.186682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.191357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.191387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.191404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.195777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.195806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.195823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:08.778 [2024-11-15 11:43:49.200194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:08.778 [2024-11-15 11:43:49.200223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.778 [2024-11-15 11:43:49.200239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.204579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.204609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.204625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.209645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.209675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.209692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.214710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.214739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.214756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.219165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.219195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.219211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.224178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.224208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.224225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.229877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.229908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.229941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.235982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.236014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.236032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.242546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.242578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.242596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.247930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.247961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.247993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.253354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.253385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.253418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.258656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.258702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.258718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.036 5572.00 IOPS, 696.50 MiB/s [2024-11-15T10:43:49.463Z] [2024-11-15 11:43:49.265167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.265196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.265212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.270121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.270158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.270175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.274998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.275028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.275046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.279998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.280029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.280046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.284480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.284509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.284526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.036 [2024-11-15 11:43:49.289047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.036 [2024-11-15 11:43:49.289079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.036 [2024-11-15 11:43:49.289096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.293527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.293556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.293574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.298043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.298073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.298089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.302536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.302565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.302582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.307114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.307144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.307161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.311619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.311650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.311667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.316132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.316161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.316177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.320659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.320688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.320705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.325183] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.325212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.325230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.330401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.330432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.330449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.334723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.334753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.334770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.339770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.339801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.339819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.345859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.345890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.345907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.351785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.351816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.351840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.357621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.357653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.357685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:09.037 [2024-11-15 11:43:49.363350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.363380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.363397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.369041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.369072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.369089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.374842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.374873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.374890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.380654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.380685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.380703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.386470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.386500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.386518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.392826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.392858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.392875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.399153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.399204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.405892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.405928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.405946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.412191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.412222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.412240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.417619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.417650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.417668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.422718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.422749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.422767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.427447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.037 [2024-11-15 11:43:49.427477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.037 [2024-11-15 11:43:49.427494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.037 [2024-11-15 11:43:49.432722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.038 [2024-11-15 11:43:49.432753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.038 [2024-11-15 11:43:49.432769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.038 [2024-11-15 11:43:49.438150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.038 [2024-11-15 11:43:49.438181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.038 [2024-11-15 11:43:49.438200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.038 [2024-11-15 11:43:49.442986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.038 [2024-11-15 11:43:49.443016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.038 [2024-11-15 11:43:49.443033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.038 [2024-11-15 11:43:49.447692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.038 [2024-11-15 11:43:49.447721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.038 [2024-11-15 11:43:49.447738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.038 [2024-11-15 11:43:49.452320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.038 [2024-11-15 11:43:49.452349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.038 [2024-11-15 11:43:49.452366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.038 [2024-11-15 11:43:49.457243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.038 [2024-11-15 11:43:49.457273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.038 [2024-11-15 11:43:49.457290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.462090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.462121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.462138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.466880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.466910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.466927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.471524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.471553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 
[2024-11-15 11:43:49.471570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.477482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.477513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.477531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.482844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.482875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.482893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.487423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.487453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.487471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.492296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.492333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.492357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.496216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.496246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.496264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.500477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.500522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.500538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.506613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.506644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.506661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.512238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.512269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.512286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.516804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.516836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.516853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.519820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.519850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.519868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.524484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.524514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.524532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.529441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.529473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.529490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.535247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.535278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.535296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.541333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.541365] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.541383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.546319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.546350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.546367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.550840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.550886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.555458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.555488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.555506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.560087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.560117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.560134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.564665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.564696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.564713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.569249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.569279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.569296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.573851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.573880] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.573902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.578431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.578462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.578479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.582903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.582931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.582947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.587449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.587478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.587495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.297 [2024-11-15 11:43:49.592007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.297 [2024-11-15 11:43:49.592051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.297 [2024-11-15 11:43:49.592067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.596338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.596368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.596384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.599321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.599350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.599366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.603872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.603901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.603918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.609022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.609053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.609071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.613508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.613546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.613564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.618270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.618299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.618325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.622794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.622822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.622838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.627499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.627529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.627546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.633218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.633248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.633279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.640663] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.640709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.640726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.646905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.646935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.646967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.652575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.652605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.652621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.658182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.658225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.658241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.663847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.663891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.663908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.669184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.669229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.669246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.675564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.675594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.675611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:25:09.298 [2024-11-15 11:43:49.682529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.682560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.682578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.688078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.688109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.688126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.693585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.693614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.693631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.698421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.698449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.698481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.703441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.703471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.703488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.708484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.708516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.708540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.713569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.713600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.713617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.298 [2024-11-15 11:43:49.718154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.298 [2024-11-15 11:43:49.718184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.298 [2024-11-15 11:43:49.718201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.724847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.724879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.724896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.731077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.731109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.731127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.737335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.737369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.737385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.742488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.742520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.742538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.747768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.747801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.747818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.752596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.752625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.752642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.757195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.757232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.757249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.761741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.761771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.761788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.766285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.766338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.766356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.771988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.772019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.772036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.776345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.776372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.776403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.782493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.782523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.782554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.788034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.788064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:09.557 [2024-11-15 11:43:49.788081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.793593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.793625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.793657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.798797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.798829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.798861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.803782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.803810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.803825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.808353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.808382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.808399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.813035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.813063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.813079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.817526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.817555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.817572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.822691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.822720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.822754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.828398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.828441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.828457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.836043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.836072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.836088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.842032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.842063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.842080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.848316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.848350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.848384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.854588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.854636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.854653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.860347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.860386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.860419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.866558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.866589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.866622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.872945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.872975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.873005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.878852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.878882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.878899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.885141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.885186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.891243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.891273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.891290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.897489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.897519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.897535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.902854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.902904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.908644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 
00:25:09.557 [2024-11-15 11:43:49.908674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.908690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.914513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.914544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.914561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.920209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.920238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.920255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.925747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.925790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.925806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.931160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.931192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.931210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.938004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.938035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.938069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.944269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.944322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.944341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.950335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.950366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.950390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.955809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.955840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.955857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.961435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.557 [2024-11-15 11:43:49.961466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.557 [2024-11-15 11:43:49.961484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.557 [2024-11-15 11:43:49.967366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.558 [2024-11-15 11:43:49.967398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.558 [2024-11-15 11:43:49.967416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.558 [2024-11-15 11:43:49.974328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.558 [2024-11-15 11:43:49.974361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.558 [2024-11-15 11:43:49.974378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.558 [2024-11-15 11:43:49.979590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.558 [2024-11-15 11:43:49.979620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.558 [2024-11-15 11:43:49.979638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:49.984677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:49.984708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:49.984726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:49.989457] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:49.989488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:49.989505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:49.995323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:49.995352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:49.995383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.000565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.000602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.000621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.005694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.005729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.005747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.011673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.011706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.011724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.017361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.017398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.017418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.022838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.022872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.022889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:25:09.816 [2024-11-15 11:43:50.028319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.028353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.028371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.032880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.032912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.032930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.038136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.038168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.038186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.043526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.043557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.043575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.048689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.048720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.048737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.053688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.053718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.053736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.816 [2024-11-15 11:43:50.058884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.816 [2024-11-15 11:43:50.058913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.816 [2024-11-15 11:43:50.058930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.064263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.064298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.064324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.069387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.069418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.069435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.074335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.074365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.074382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.080056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.080086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.080104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.085041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.085082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.085099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.089824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.089854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.089880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.094858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.094901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.094918] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.100749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.100781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.100799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.107967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.107999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.108016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.112430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.112461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.112478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.119152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.119181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.119197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.125277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.125325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.125343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.130404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.130436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.130453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.135584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.135619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.135636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.140510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.140541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.140558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.145474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.145519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.145536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.150455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.150484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.150501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.155510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.155539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.155571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.161829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.161860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.161877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.169280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.169323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.169342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.176119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.176151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:09.817 [2024-11-15 11:43:50.176169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.183997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.184028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.184045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.191921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.191952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.191975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.198937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.198968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.198985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.206556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.206588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.206606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.214445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.214476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.214494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.222381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.222413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.222430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.228645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.228688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.228705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.234402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.234434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.234452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:09.817 [2024-11-15 11:43:50.238145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:09.817 [2024-11-15 11:43:50.238176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.817 [2024-11-15 11:43:50.238193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.076 [2024-11-15 11:43:50.243884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:10.076 [2024-11-15 11:43:50.243916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.076 [2024-11-15 11:43:50.243933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:10.076 [2024-11-15 11:43:50.249977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:10.076 [2024-11-15 11:43:50.250014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.076 [2024-11-15 11:43:50.250032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:10.076 [2024-11-15 11:43:50.256041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:10.076 [2024-11-15 11:43:50.256087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.076 [2024-11-15 11:43:50.256109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:10.076 [2024-11-15 11:43:50.261905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:10.076 [2024-11-15 11:43:50.261937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:10.076 [2024-11-15 11:43:50.261954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.076 5652.50 IOPS, 706.56 MiB/s [2024-11-15T10:43:50.503Z] [2024-11-15 11:43:50.268761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24ad2e0) 00:25:10.076 [2024-11-15 
11:43:50.268792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:10.076 [2024-11-15 11:43:50.268809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:10.076
00:25:10.076 Latency(us)
00:25:10.076 [2024-11-15T10:43:50.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.076 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:10.076 nvme0n1 : 2.00 5650.21 706.28 0.00 0.00 2826.96 725.14 10194.49
00:25:10.076 [2024-11-15T10:43:50.503Z] ===================================================================================================================
00:25:10.076 [2024-11-15T10:43:50.503Z] Total : 5650.21 706.28 0.00 0.00 2826.96 725.14 10194.49
00:25:10.076 {
00:25:10.076 "results": [
00:25:10.076 {
00:25:10.076 "job": "nvme0n1",
00:25:10.076 "core_mask": "0x2",
00:25:10.076 "workload": "randread",
00:25:10.076 "status": "finished",
00:25:10.076 "queue_depth": 16,
00:25:10.076 "io_size": 131072,
00:25:10.076 "runtime": 2.003641,
00:25:10.076 "iops": 5650.2137858029455,
00:25:10.076 "mibps": 706.2767232253682,
00:25:10.076 "io_failed": 0,
00:25:10.076 "io_timeout": 0,
00:25:10.076 "avg_latency_us": 2826.957848900928,
00:25:10.076 "min_latency_us": 725.1437037037037,
00:25:10.076 "max_latency_us": 10194.488888888889
00:25:10.076 }
00:25:10.076 ],
00:25:10.076 "core_count": 1
00:25:10.076 }
00:25:10.076 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:10.076 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:10.076 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:10.076 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:10.076 | .driver_specific
00:25:10.076 | .nvme_error
00:25:10.076 | .status_code
00:25:10.076 | .command_transient_transport_error'
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 366 > 0 ))
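For reference, the get_transient_errcount step traced above amounts to one RPC call plus a jq filter. A minimal standalone sketch, assuming the same rpc.py path, bperf RPC socket, and bdev name that appear in the trace, and that jq is available on the host:

    # Read back how many completions carried COMMAND TRANSIENT TRANSPORT ERROR.
    # bdev_nvme exposes this per-status-code counter in bdev_get_iostat when
    # --nvme-error-stat is set on bdev_nvme_set_options.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

The (( 366 > 0 )) check above is that counter being asserted non-zero after the 2-second randread run.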
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3032504
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3032504 ']'
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3032504
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032504
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032504'
killing process with pid 3032504
11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3032504
Received shutdown signal, test time was about 2.000000 seconds
00:25:10.334
00:25:10.334 Latency(us)
00:25:10.334 [2024-11-15T10:43:50.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.334 [2024-11-15T10:43:50.761Z] ===================================================================================================================
00:25:10.334 [2024-11-15T10:43:50.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:10.334 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3032504
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3032915
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3032915 /var/tmp/bperf.sock
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3032915 ']'
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:10.593 11:43:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:10.593 [2024-11-15 11:43:50.872764] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization...
00:25:10.593 [2024-11-15 11:43:50.872854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032915 ]
00:25:10.593 [2024-11-15 11:43:50.937727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:10.593 [2024-11-15 11:43:50.992166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:10.852 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:10.852 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:10.852 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:10.852 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:11.109 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:11.109 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:11.109 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:11.109 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:11.109 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:11.109 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:11.673 nvme0n1
00:25:11.673 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:11.673 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:11.673 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:11.673 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:11.673 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:11.673 11:43:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:11.673 Running I/O for 2 seconds...
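The run_bperf_err setup just traced reduces to a handful of RPCs. A condensed sketch of the same sequence, reusing the exact arguments from the log; bperf_rpc expands to rpc.py against /var/tmp/bperf.sock (the @18 lines above show this), while the two accel_error_inject_error calls go through the script's rpc_cmd wrapper, whose RPC socket is not shown in this excerpt:

    BPERF_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Track per-status-code NVMe errors and retry indefinitely, so digest errors are
    # retried and counted instead of failing the bdevperf job
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c error injection left over from the previous run (via rpc_cmd in the trace)
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the TCP target with data digest enabled (--ddgst); this creates bdev nvme0n1
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject corrupted crc32c results at the interval the test uses (-i 256) so data digests mismatch
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    # Start the queued bdevperf job (randwrite, 4 KiB blocks, queue depth 128 for this run)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

The digest-error records that follow are the expected outcome: each corrupted crc32c surfaces as a data digest error in tcp.c/nvme_tcp.c and the command completes with COMMAND TRANSIENT TRANSPORT ERROR, which the next get_transient_errcount check then counts.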
00:25:11.673 [2024-11-15 11:43:52.015732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166ebfd0 00:25:11.673 [2024-11-15 11:43:52.016850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.673 [2024-11-15 11:43:52.016901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:11.673 [2024-11-15 11:43:52.027206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166f8618 00:25:11.673 [2024-11-15 11:43:52.028222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.673 [2024-11-15 11:43:52.028250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:11.673 [2024-11-15 11:43:52.039385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166eb760 00:25:11.673 [2024-11-15 11:43:52.040396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.673 [2024-11-15 11:43:52.040425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:11.673 [2024-11-15 11:43:52.051826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166e6300 00:25:11.673 [2024-11-15 11:43:52.052888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.673 [2024-11-15 11:43:52.052932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:11.674 [2024-11-15 11:43:52.063052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc560 00:25:11.674 [2024-11-15 11:43:52.063950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.674 [2024-11-15 11:43:52.063992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:11.674 [2024-11-15 11:43:52.075196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166e5658 00:25:11.674 [2024-11-15 11:43:52.076397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.674 [2024-11-15 11:43:52.076425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:11.674 [2024-11-15 11:43:52.087274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166de8a8 00:25:11.674 [2024-11-15 11:43:52.088507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.674 [2024-11-15 11:43:52.088551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:25:11.931 [2024-11-15 11:43:52.098787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166f4f40 00:25:11.931 [2024-11-15 11:43:52.100083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.931 [2024-11-15 11:43:52.100127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:11.931 [2024-11-15 11:43:52.111240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166f8a50 00:25:11.931 [2024-11-15 11:43:52.112618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.931 [2024-11-15 11:43:52.112648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:11.931 [2024-11-15 11:43:52.123532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.931 [2024-11-15 11:43:52.124987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.931 [2024-11-15 11:43:52.125030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:11.931 [2024-11-15 11:43:52.134477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166f96f8 00:25:11.932 [2024-11-15 11:43:52.135610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.135639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.147049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166e6fa8 00:25:11.932 [2024-11-15 11:43:52.148316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.148359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.158143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166f92c0 00:25:11.932 [2024-11-15 11:43:52.159318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.159359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.170533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166e5220 00:25:11.932 [2024-11-15 11:43:52.171935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.171979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.183067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.183382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.183410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.196956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.197251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.197280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.210711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.210987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.211030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.224603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.224872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.224916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.238584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.238804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.238831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.252609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.252843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.252886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.266672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.266899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.266943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.280452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.280747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.280782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.294288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.294495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.294524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.307860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.308124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.308163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.321477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.321792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.321834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.334561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.334834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.334862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.932 [2024-11-15 11:43:52.347933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:11.932 [2024-11-15 11:43:52.348212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.932 [2024-11-15 11:43:52.348240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.361412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.190 [2024-11-15 11:43:52.361644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.190 [2024-11-15 11:43:52.361686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.374936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.190 [2024-11-15 11:43:52.375176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.190 [2024-11-15 11:43:52.375204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.388273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.190 [2024-11-15 11:43:52.388515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.190 [2024-11-15 11:43:52.388543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.401672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.190 [2024-11-15 11:43:52.401951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.190 [2024-11-15 11:43:52.401978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.415188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.190 [2024-11-15 11:43:52.415421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.190 [2024-11-15 11:43:52.415448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.429031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.190 [2024-11-15 11:43:52.429299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.190 [2024-11-15 11:43:52.429334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.190 [2024-11-15 11:43:52.442847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.443130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.443172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.456727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.456959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.457001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.470535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.470782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.470825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.484598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.484883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.484910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.498586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.498842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.498884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.512376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.512720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.512763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.526358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.526712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.526754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.540110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.540382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.540410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.553965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.554202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 
11:43:52.554245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.567672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.567918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.567961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.581537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.581830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.581858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.595521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.595753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.595794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.191 [2024-11-15 11:43:52.609471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.191 [2024-11-15 11:43:52.609719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.191 [2024-11-15 11:43:52.609761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.448 [2024-11-15 11:43:52.622936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.448 [2024-11-15 11:43:52.623172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.448 [2024-11-15 11:43:52.623213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.448 [2024-11-15 11:43:52.636896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.448 [2024-11-15 11:43:52.637127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.448 [2024-11-15 11:43:52.637159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.448 [2024-11-15 11:43:52.650800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.448 [2024-11-15 11:43:52.651067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:12.449 [2024-11-15 11:43:52.651110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.664816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.665045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.665072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.678914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.679131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.679173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.692931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.693147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.693174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.706883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.707174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.707216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.720961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.721187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.721213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.735073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.735350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.735393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.749106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.749325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1327 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.749353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.763036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.763265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.763319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.777170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.777446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.777475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.791096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.791384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.791411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.805089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.805402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.805430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.819033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.819321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.819363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.833003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.833284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.833334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.846928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.847214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22168 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.847256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.449 [2024-11-15 11:43:52.860811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.449 [2024-11-15 11:43:52.861037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.449 [2024-11-15 11:43:52.861080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.874509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.874800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.874842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.888118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.888411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.888439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.901868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.902153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.902196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.915839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.916061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.916088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.929757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.930044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.930085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.943728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.944032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:76 nsid:1 lba:11014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.944060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.957813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.958038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.958079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.971878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.972124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.972166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.985792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:52.986033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:52.986073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:52.999685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.000001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.000028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 18884.00 IOPS, 73.77 MiB/s [2024-11-15T10:43:53.134Z] [2024-11-15 11:43:53.013686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.013948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.013990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.027641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.027920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.027961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.041422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 
11:43:53.041748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.041776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.055293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.055521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.055549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.069242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.069549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.069577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.083187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.083415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.083443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.097207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.097461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.097488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.111257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.111466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.111509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.707 [2024-11-15 11:43:53.125334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.707 [2024-11-15 11:43:53.125611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.707 [2024-11-15 11:43:53.125639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.138824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 
00:25:12.967 [2024-11-15 11:43:53.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.139076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.152782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.153103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.153144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.166712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.166990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.167032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.180620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.180915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.180957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.194563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.194808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.194850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.208588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.208815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.208857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.222564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.222869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.222911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.236488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) 
with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.236739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.236782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.250563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.250833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.250875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.264548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.264802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.264844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.278684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.278984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.279026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.292613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.292977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.293005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.306454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.306679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.306721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.320262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.320491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.320519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.334089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.334288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.334340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.347954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.348182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.348209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.361926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.362230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.362257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.375940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.376248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.376291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:12.967 [2024-11-15 11:43:53.389599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:12.967 [2024-11-15 11:43:53.389843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.967 [2024-11-15 11:43:53.389871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.402275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.402510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.402552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.415599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.415847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.415879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.429239] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.429466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.429496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.443046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.443324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.443364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.456798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.457058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.457087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.470270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.470506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.470534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.483883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.484180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.484217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.497366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.497551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.497579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.510928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.511179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.511208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.524575] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.524837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.524866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.538293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.538517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.538545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.551702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.551956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.551984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.565286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.565539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.260 [2024-11-15 11:43:53.578988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.260 [2024-11-15 11:43:53.579273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.260 [2024-11-15 11:43:53.579309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.261 [2024-11-15 11:43:53.592569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.261 [2024-11-15 11:43:53.592843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.261 [2024-11-15 11:43:53.592871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.261 [2024-11-15 11:43:53.606212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.261 [2024-11-15 11:43:53.606456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.261 [2024-11-15 11:43:53.606484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.261 
[2024-11-15 11:43:53.619832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.261 [2024-11-15 11:43:53.620108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.261 [2024-11-15 11:43:53.620136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.261 [2024-11-15 11:43:53.633520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.261 [2024-11-15 11:43:53.633783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.261 [2024-11-15 11:43:53.633811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.261 [2024-11-15 11:43:53.647104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.261 [2024-11-15 11:43:53.647381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.261 [2024-11-15 11:43:53.647409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.261 [2024-11-15 11:43:53.660597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.261 [2024-11-15 11:43:53.660809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.261 [2024-11-15 11:43:53.660836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.673475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.537 [2024-11-15 11:43:53.673757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.537 [2024-11-15 11:43:53.673784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.686985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.537 [2024-11-15 11:43:53.687237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.537 [2024-11-15 11:43:53.687265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.699747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.537 [2024-11-15 11:43:53.700002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.537 [2024-11-15 11:43:53.700030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.713272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.537 [2024-11-15 11:43:53.713502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.537 [2024-11-15 11:43:53.713529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.726854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.537 [2024-11-15 11:43:53.727109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.537 [2024-11-15 11:43:53.727152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.740374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.537 [2024-11-15 11:43:53.740619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.537 [2024-11-15 11:43:53.740646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.537 [2024-11-15 11:43:53.754103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.754356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.754383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.767752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.768025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.768053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.781392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.781649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.781677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.795060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.795313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.795341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 
cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.808701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.808962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.808989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.822345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.822573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.822601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.835884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.836154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.836188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.849459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.849737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.849765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.862931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.863194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.863222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.876520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.876789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.876817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.890162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.890386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.890414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.903721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.903962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.903989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.917320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.917534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.917561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.930868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.931077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.931104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.538 [2024-11-15 11:43:53.944093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.538 [2024-11-15 11:43:53.944316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.538 [2024-11-15 11:43:53.944344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.797 [2024-11-15 11:43:53.957056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.797 [2024-11-15 11:43:53.957327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.797 [2024-11-15 11:43:53.957356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.797 [2024-11-15 11:43:53.970455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.797 [2024-11-15 11:43:53.970722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.797 [2024-11-15 11:43:53.970750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.797 [2024-11-15 11:43:53.984022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.797 [2024-11-15 11:43:53.984293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.797 [2024-11-15 11:43:53.984328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.797 [2024-11-15 11:43:53.997746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.797 [2024-11-15 11:43:53.998026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.797 [2024-11-15 11:43:53.998054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.797 18793.50 IOPS, 73.41 MiB/s [2024-11-15T10:43:54.224Z] [2024-11-15 11:43:54.011310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da210) with pdu=0x2000166fc128 00:25:13.797 [2024-11-15 11:43:54.011546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:13.797 [2024-11-15 11:43:54.011575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:13.797 00:25:13.797 Latency(us) 00:25:13.797 [2024-11-15T10:43:54.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.797 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:13.797 nvme0n1 : 2.01 18794.53 73.42 0.00 0.00 6794.69 2718.53 14563.56 00:25:13.797 [2024-11-15T10:43:54.224Z] =================================================================================================================== 00:25:13.797 [2024-11-15T10:43:54.224Z] Total : 18794.53 73.42 0.00 0.00 6794.69 2718.53 14563.56 00:25:13.797 { 00:25:13.797 "results": [ 00:25:13.797 { 00:25:13.797 "job": "nvme0n1", 00:25:13.797 "core_mask": "0x2", 00:25:13.797 "workload": "randwrite", 00:25:13.797 "status": "finished", 00:25:13.797 "queue_depth": 128, 00:25:13.797 "io_size": 4096, 00:25:13.797 "runtime": 2.008403, 00:25:13.797 "iops": 18794.534762196632, 00:25:13.797 "mibps": 73.4161514148306, 00:25:13.797 "io_failed": 0, 00:25:13.797 "io_timeout": 0, 00:25:13.797 "avg_latency_us": 6794.688440641346, 00:25:13.797 "min_latency_us": 2718.5303703703703, 00:25:13.797 "max_latency_us": 14563.555555555555 00:25:13.797 } 00:25:13.797 ], 00:25:13.797 "core_count": 1 00:25:13.797 } 00:25:13.797 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:13.797 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:13.797 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:13.797 | .driver_specific 00:25:13.797 | .nvme_error 00:25:13.797 | .status_code 00:25:13.797 | .command_transient_transport_error' 00:25:13.797 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3032915 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3032915 ']' 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # kill -0 3032915 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032915 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032915' 00:25:14.055 killing process with pid 3032915 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3032915 00:25:14.055 Received shutdown signal, test time was about 2.000000 seconds 00:25:14.055 00:25:14.055 Latency(us) 00:25:14.055 [2024-11-15T10:43:54.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.055 [2024-11-15T10:43:54.482Z] =================================================================================================================== 00:25:14.055 [2024-11-15T10:43:54.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:14.055 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3032915 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3033334 00:25:14.312 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3033334 /var/tmp/bperf.sock 00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3033334 ']' 00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
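For readers following the trace: digest.sh@57-60 above start a second bdevperf instance for the 128 KiB, qd=16 pass and then wait for its RPC socket. Below is a minimal sketch of that launch-and-wait step; the polling loop is a stand-in for the repo's waitforlisten helper (an illustration, not the helper itself), and everything else is taken from the command line traced above.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf on core 1 (-m 2, "Core Mask 0x2" in the stats above) with the traced
# parameters and keep it idle (-z) until perform_tests is issued over the bperf socket.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the RPC server answers on the UNIX domain socket (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
    sleep 0.1
done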
00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.313 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.313 [2024-11-15 11:43:54.610444] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:14.313 [2024-11-15 11:43:54.610517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033334 ] 00:25:14.313 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:14.313 Zero copy mechanism will not be used. 00:25:14.313 [2024-11-15 11:43:54.679968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.571 [2024-11-15 11:43:54.738453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.571 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.571 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:14.571 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.571 11:43:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.829 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:14.829 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.829 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:14.829 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.829 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.829 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.396 nvme0n1 00:25:15.396 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:15.396 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.396 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.396 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.396 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.396 11:43:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
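Once the bperf instance is up, the test repeats the sequence that produced the first-pass result earlier in the log: configure retries and NVMe error statistics, arm crc32c corruption in the accel layer, attach the TCP controller with data digest enabled, run the workload, and read back the transient-transport-error counter. A condensed sketch of those RPCs follows, using the controller parameters shown in this trace; note the accel_error_inject_error calls go through the test's rpc_cmd helper, and the default-socket form used for them below is an assumption, since the trace does not show which socket rpc_cmd resolves to here.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/bperf.sock

# Keep per-NVMe error statistics and retry failed I/O indefinitely (digest.sh@61).
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# digest.sh@63: injection starts disabled; socket choice for rpc_cmd is assumed here.
"$RPC" accel_error_inject_error -o crc32c -t disable

# digest.sh@64: attach the TCP controller with data digest (--ddgst) enabled.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# digest.sh@67: arm crc32c corruption so writes fail digest verification and complete
# with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as logged throughout this run.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# digest.sh@69: drive the configured 2-second workload.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

# digest.sh@71/@27/@28 (get_transient_errcount): the pass criterion is a non-zero count,
# e.g. the (( 148 > 0 )) check seen above for the first pass.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))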
00:25:15.396 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.396 Zero copy mechanism will not be used. 00:25:15.396 Running I/O for 2 seconds... 00:25:15.396 [2024-11-15 11:43:55.709895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.709994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.710030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.715672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.715875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.715907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.722125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.722358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.722396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.728518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.728707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.728737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.734972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.735173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.735202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.741412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.741500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.741529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.747121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.747345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.747374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.753584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.753736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.753765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.759199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.759376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.765670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.765867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.765896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.772031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.772235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.772264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.778450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.778592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.778632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.784845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.785012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.785040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.791345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.791476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.791505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.797839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.798030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.798059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.804232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.804418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.804447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.810569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.810765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.810794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.396 [2024-11-15 11:43:55.817225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.396 [2024-11-15 11:43:55.817325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.396 [2024-11-15 11:43:55.817353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.824098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.824215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.824244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.829367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.829484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.829512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.834456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.834537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.834566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.839247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.839342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.839371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.844022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.844114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.844143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.848931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.849005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.849032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.854430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.854504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.854530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.859664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.859732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.859758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.865278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.865355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.865382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.870710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.870804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.870832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.875993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.876082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.876114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.881381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.881480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.881507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.887348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.887426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.887454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.892230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.892345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.892375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.898185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.898346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.898374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.903621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.903777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.903806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.908433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 
11:43:55.908581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.908609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.913178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.913263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.913290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.918252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.656 [2024-11-15 11:43:55.918375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.656 [2024-11-15 11:43:55.918405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.656 [2024-11-15 11:43:55.923000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.923118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.923147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.927748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.927963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.927991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.933787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.933946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.933975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.938952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.939037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.939065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.943565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with 
pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.943715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.943743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.948288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.948397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.948425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.953022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.953160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.953188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.957691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.957787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.957816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.962495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.962583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.962610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.967240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.967349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.967378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.971832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.971941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.971970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.976680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.976780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.976808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.982510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.982595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.982623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.987451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.987530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.987558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.992076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.992165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.992208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:55.996820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:55.996901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:55.996929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.001423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.001508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.001536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.006126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.006197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.006230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.011234] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.011323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.011352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.016567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.016679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.016708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.021854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.021929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.021960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.027339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.027418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.027445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.032862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.033058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.033087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.039316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.039498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.039526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.045884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.046028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.046056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.051909] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.052063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.052091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.057909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.058050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.657 [2024-11-15 11:43:56.058079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.657 [2024-11-15 11:43:56.063299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.657 [2024-11-15 11:43:56.063453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-11-15 11:43:56.063481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.658 [2024-11-15 11:43:56.069240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.658 [2024-11-15 11:43:56.069397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-11-15 11:43:56.069426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.658 [2024-11-15 11:43:56.075338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.658 [2024-11-15 11:43:56.075446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.658 [2024-11-15 11:43:56.075475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.916 [2024-11-15 11:43:56.081429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.916 [2024-11-15 11:43:56.081552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.916 [2024-11-15 11:43:56.081580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.086365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.086461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.086490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.917 
[2024-11-15 11:43:56.091212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.091345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.091374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.096826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.096902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.096928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.102325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.102411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.102437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.108418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.108583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.108611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.114579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.114679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.114708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.119664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.119760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.119790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.124384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.124527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.124556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.129212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.129321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.129349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.134190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.134309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.134339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.138968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.139077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.139106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.144081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.144253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.144281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.151290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.151472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.151505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.156634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.156777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.156806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.161783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.161890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.161919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.167037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.167143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.167171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.172066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.172238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.172268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.177694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.177768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.177795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.183060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.183127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.183154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.188448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.188519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.188546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.917 [2024-11-15 11:43:56.193501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.917 [2024-11-15 11:43:56.193609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.917 [2024-11-15 11:43:56.193637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.198819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.198894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.198921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.204176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.204252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.204281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.209549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.209651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.209679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.214631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.214739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.214767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.219746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.219817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.219844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.224929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.224999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.225026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.230001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.230070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.230098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.235296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.235438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.235466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.240849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.240916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.240943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.246464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.246549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.246575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.252123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.252278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.252315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.257668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.257775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.257804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.264268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.264446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.264475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.269895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.269967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.269994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.275974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.276066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 
11:43:56.276093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.281918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.282025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.282053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.287973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.288048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.288075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.293788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.293907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.293940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.298767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.298880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.303832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.304025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.304054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.310081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.310256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.918 [2024-11-15 11:43:56.310284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.315326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.918 [2024-11-15 11:43:56.315458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:15.918 [2024-11-15 11:43:56.315486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.918 [2024-11-15 11:43:56.320013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.919 [2024-11-15 11:43:56.320112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.919 [2024-11-15 11:43:56.320139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.919 [2024-11-15 11:43:56.324642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.919 [2024-11-15 11:43:56.324785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.919 [2024-11-15 11:43:56.324813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.919 [2024-11-15 11:43:56.329218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.919 [2024-11-15 11:43:56.329323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.919 [2024-11-15 11:43:56.329352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.919 [2024-11-15 11:43:56.333824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.919 [2024-11-15 11:43:56.333936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.919 [2024-11-15 11:43:56.333964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.919 [2024-11-15 11:43:56.338501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:15.919 [2024-11-15 11:43:56.338598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.919 [2024-11-15 11:43:56.338631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.178 [2024-11-15 11:43:56.343148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.178 [2024-11-15 11:43:56.343245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.343273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.347898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.348042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.348071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.352618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.352730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.352759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.357364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.357454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.357482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.362118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.362290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.362326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.368334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.368494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.368522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.373671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.373843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.373874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.380484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.380667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.380696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.387144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.387253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.387282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.392753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.392837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.392865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.398489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.398656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.398684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.403827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.403897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.403924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.409222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.409295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.409330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.414789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.414883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.414915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.420262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.420363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.420390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.425747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.425817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.425843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.430943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.431015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.431048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.436420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.436496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.436527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.441810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.441880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.441907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.447301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.447405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.447432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.452697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.452779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.452808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.457562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.457653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.457681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.462656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.462732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.462763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.467644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.467742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.467769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.472403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.472489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.472518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.477245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.477323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.179 [2024-11-15 11:43:56.477356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.179 [2024-11-15 11:43:56.482040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.179 [2024-11-15 11:43:56.482123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.482150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.486846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.486918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.486945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.491644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.491721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.491747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.496534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 
11:43:56.496610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.496641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.501339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.501420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.501448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.506219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.506294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.506327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.511034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.511108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.511134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.515789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.515902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.515930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.521203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.521385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.521414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.527297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.527464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.527492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.533879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with 
pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.534065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.534093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.540122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.540271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.540299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.546387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.546560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.546589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.552442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.552587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.552616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.558625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.558795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.558823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.564769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.564948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.564976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.570773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.570879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.570915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.576848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.577001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.577029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.583032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.583182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.583210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.589439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.589611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.589639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.595440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.595583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.595611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.180 [2024-11-15 11:43:56.601457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.180 [2024-11-15 11:43:56.601605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.180 [2024-11-15 11:43:56.601633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.607627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.607818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.607847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.613588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.613755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.613785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.619664] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.619807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.619836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.625889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.626044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.626077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.631886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.632062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.632091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.638118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.638282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.638317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.644896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.645006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.645034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.650746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.650884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.650912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.655582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.655677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.655706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.660396] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.660543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.660571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.665377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.665536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.665564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.670590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.670791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.670819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.676643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.676747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.676775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.682108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.682222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.682250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.688202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.688277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.688317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.440 [2024-11-15 11:43:56.693498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.693630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.440 [2024-11-15 11:43:56.693658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.440 
[2024-11-15 11:43:56.698789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.440 [2024-11-15 11:43:56.698914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.698942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.703698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.703791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.703819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 5646.00 IOPS, 705.75 MiB/s [2024-11-15T10:43:56.868Z] [2024-11-15 11:43:56.710567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.710760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.710788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.715867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.715952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.715980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.720482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.720580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.720614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.725189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.725320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.725349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.729776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.729916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.729944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.734394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.734482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.734512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.738977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.739080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.739107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.744828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.745015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.745044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.750093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.750184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.750212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.754647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.754755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.754783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.759528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.759659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.759687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.764518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.764659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.764687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.770566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.770708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.770736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.776547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.776628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.776656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.783074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.783226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.783254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.789460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.789635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.789663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.796275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.796451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.796480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.803115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.803214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.803245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.809699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.809905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 
11:43:56.809933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.816561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.816631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.816659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.822709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.822782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.822809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.827823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.827898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.827925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.832896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.832971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.832998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.838823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.838913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.838940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.844064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.844141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.441 [2024-11-15 11:43:56.844168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.441 [2024-11-15 11:43:56.849422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.441 [2024-11-15 11:43:56.849492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:16.442 [2024-11-15 11:43:56.849519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.442 [2024-11-15 11:43:56.854614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.442 [2024-11-15 11:43:56.854701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.442 [2024-11-15 11:43:56.854728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.442 [2024-11-15 11:43:56.860341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.442 [2024-11-15 11:43:56.860428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.442 [2024-11-15 11:43:56.860457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.701 [2024-11-15 11:43:56.865333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.701 [2024-11-15 11:43:56.865411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.701 [2024-11-15 11:43:56.865445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.701 [2024-11-15 11:43:56.869759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.701 [2024-11-15 11:43:56.869843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.701 [2024-11-15 11:43:56.869872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.701 [2024-11-15 11:43:56.874117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.701 [2024-11-15 11:43:56.874207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.701 [2024-11-15 11:43:56.874236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.701 [2024-11-15 11:43:56.878404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.701 [2024-11-15 11:43:56.878490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.701 [2024-11-15 11:43:56.878518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.701 [2024-11-15 11:43:56.882595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.882702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.882731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.887251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.887367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.887397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.891870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.891984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.892012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.896213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.896300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.896336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.900408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.900525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.900554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.904606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.904726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.904756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.908935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.909037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.909065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.913230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.913340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.913368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.917443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.917526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.917555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.921672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.921783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.921811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.925854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.925953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.925982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.930047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.930157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.930185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.934297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.934436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.934464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.938609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.938706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.938734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.942905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.942991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.943019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.947162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.947278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.947314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.951449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.951549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.951577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.955641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.955724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.955752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.959848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.959951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.959979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.964108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.964198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.964224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.968372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.968475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.968504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.972632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.972730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.972757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.976826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.976931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.976965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.980970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.981086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.981115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.985223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.985346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.985376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.990154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.990376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.702 [2024-11-15 11:43:56.990405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.702 [2024-11-15 11:43:56.995359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.702 [2024-11-15 11:43:56.995579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:56.995608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.001020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.001159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.001187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.006510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 
11:43:57.006619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.006647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.011584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.011751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.011779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.016680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.016955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.016983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.021660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.021845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.021873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.026743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.026929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.026956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.031740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.031879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.031906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.036853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.036961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.036989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.042015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with 
pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.042163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.042191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.047126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.047250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.047278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.052385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.052650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.052678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.057559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.057697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.057725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.062680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.062895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.062923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.067830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.067980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.068009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.072952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.073189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.073218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.077844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.077960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.077988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.082421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.082551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.082579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.087777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.087986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.088014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.093460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.093613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.093641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.099360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.099537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.099565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.105373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.105508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.105536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.111429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.111658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.111691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.117522] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.117735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.117763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.703 [2024-11-15 11:43:57.123919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.703 [2024-11-15 11:43:57.124142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.703 [2024-11-15 11:43:57.124171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.129660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.129883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.129910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.135631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.135840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.135868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.141730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.141954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.141982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.147722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.147903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.147931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.153086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.153258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.153286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.158170] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.158408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.158437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.964 [2024-11-15 11:43:57.162726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.964 [2024-11-15 11:43:57.162918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.964 [2024-11-15 11:43:57.162945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.167753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.167981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.168008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.172958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.173191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.173219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.178213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.178402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.178430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.183224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.183500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.183528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.188503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.188680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.188708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 
[2024-11-15 11:43:57.193664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.193881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.193909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.198878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.199058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.199086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.204309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.204489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.204516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.208929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.209062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.209089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.213447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.213631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.213659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.218868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.219049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.219077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.223237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.223398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.223426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.227458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.227570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.227598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.231631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.231743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.231771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.235885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.236017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.236046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.240127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.240252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.240279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.244384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.244514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.244548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.248733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.248855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.248882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.253073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.253185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.253212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.257414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.257540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.257568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.261803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.261916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.261944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.266115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.266258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.266286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.270429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.270562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.270589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.274663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.274794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.274821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.278935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.279048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.279076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.283220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.283354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.283381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.965 [2024-11-15 11:43:57.287463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.965 [2024-11-15 11:43:57.287598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.965 [2024-11-15 11:43:57.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.291769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.291907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.291935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.295986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.296153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.300250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.300371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.300398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.304611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.304734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.304762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.308897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.309008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.309036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.313263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.313376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.313403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.317604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.317746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.317773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.321889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.322026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.322054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.326112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.326242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.326270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.330437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.330561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.330589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.334751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.334889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.334916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.339047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.339161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.339189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.343290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.343419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 
11:43:57.343446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.347566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.347684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.347712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.351830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.351956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.351983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.356144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.356301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.356352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.360449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.360595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.360622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.364773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.364917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.364944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.368967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.369066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.369094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.373263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.373394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:16.966 [2024-11-15 11:43:57.373422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.377596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.377724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.377753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.381870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.382000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.382027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:16.966 [2024-11-15 11:43:57.386048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:16.966 [2024-11-15 11:43:57.386159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.966 [2024-11-15 11:43:57.386186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.390277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.390441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.390469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.394525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.394696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.398886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.399006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.399033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.403124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.403250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.403278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.407512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.407651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.407679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.411990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.412109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.412136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.416288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.416407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.416436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.421090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.421207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.421234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.425656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.425777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.425805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.430035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.430157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.430184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.434273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.434385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.434413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.438511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.438647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.438676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.442812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.442929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.442956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.447002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.447171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.447199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.451208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.451351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.451378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.455436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.455553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.455581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.459745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.459861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.459888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.464107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.464237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.464264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.468361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.468475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.468513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.472756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.472853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.472881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.477092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.477196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.477222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.481255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.481421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.481450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.485480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.485595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.485622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.489745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.489897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.489923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.494000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.494131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.494158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.498276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.227 [2024-11-15 11:43:57.498379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.227 [2024-11-15 11:43:57.498405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.227 [2024-11-15 11:43:57.502574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.502709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.502736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.506813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.506919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.506944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.511133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.511262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.511289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.515365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.515499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.515527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.519580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.519710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.519738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.523902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 
11:43:57.524032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.524060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.528206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.528333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.528361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.532449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.532568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.532595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.536755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.536909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.536936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.541030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.541165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.541192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.545428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.545561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.545589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.549700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.549828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.549855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.554014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with 
pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.554167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.554195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.558335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.558465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.558492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.562768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.562886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.562913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.567011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.567163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.567191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.571473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.571610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.571637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.576763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.576940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.576968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.581706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.581818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.581852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.587312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.587516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.587544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.592653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.592872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.592900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.597819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.598094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.598122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.603298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.603551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.603579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.608644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.608883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.608911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.613207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.613357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.613385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.617614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.617741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.617769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.621946] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.228 [2024-11-15 11:43:57.622090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.228 [2024-11-15 11:43:57.622118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.228 [2024-11-15 11:43:57.626298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.229 [2024-11-15 11:43:57.626429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.229 [2024-11-15 11:43:57.626456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.229 [2024-11-15 11:43:57.630660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.229 [2024-11-15 11:43:57.630810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.229 [2024-11-15 11:43:57.630838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.229 [2024-11-15 11:43:57.635157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.229 [2024-11-15 11:43:57.635332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.229 [2024-11-15 11:43:57.635360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.229 [2024-11-15 11:43:57.640122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.229 [2024-11-15 11:43:57.640237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.229 [2024-11-15 11:43:57.640265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.229 [2024-11-15 11:43:57.645327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.229 [2024-11-15 11:43:57.645486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.229 [2024-11-15 11:43:57.645514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.229 [2024-11-15 11:43:57.649523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.229 [2024-11-15 11:43:57.649667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.229 [2024-11-15 11:43:57.649694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.487 [2024-11-15 11:43:57.653824] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.487 [2024-11-15 11:43:57.653976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.487 [2024-11-15 11:43:57.654003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.487 [2024-11-15 11:43:57.658066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.658205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.658233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.662317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.662464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.662492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.666581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.666726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.666754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.670882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.671018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.671045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.675083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.675227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.675255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.679318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.679461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.679489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.488 
[2024-11-15 11:43:57.683608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.683755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.683782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.687866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.688007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.688035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.692114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.692252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.692279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.696310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.696440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.696468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.700645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.700792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.700826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:17.488 [2024-11-15 11:43:57.705272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.705473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.705501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:17.488 6112.50 IOPS, 764.06 MiB/s [2024-11-15T10:43:57.915Z] [2024-11-15 11:43:57.711765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23da550) with pdu=0x2000166ff3c8 00:25:17.488 [2024-11-15 11:43:57.712009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.488 [2024-11-15 11:43:57.712036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:17.488
00:25:17.488 Latency(us)
00:25:17.488 [2024-11-15T10:43:57.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:17.488 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:17.488 nvme0n1 : 2.00 6108.29 763.54 0.00 0.00 2612.20 1966.08 7136.14
00:25:17.488 [2024-11-15T10:43:57.915Z] ===================================================================================================================
00:25:17.488 [2024-11-15T10:43:57.915Z] Total : 6108.29 763.54 0.00 0.00 2612.20 1966.08 7136.14
00:25:17.488 {
00:25:17.488 "results": [
00:25:17.488 {
00:25:17.488 "job": "nvme0n1",
00:25:17.488 "core_mask": "0x2",
00:25:17.488 "workload": "randwrite",
00:25:17.488 "status": "finished",
00:25:17.488 "queue_depth": 16,
00:25:17.488 "io_size": 131072,
00:25:17.488 "runtime": 2.004652,
00:25:17.488 "iops": 6108.292112546217,
00:25:17.488 "mibps": 763.5365140682771,
00:25:17.488 "io_failed": 0,
00:25:17.488 "io_timeout": 0,
00:25:17.488 "avg_latency_us": 2612.20323288417,
00:25:17.488 "min_latency_us": 1966.08,
00:25:17.488 "max_latency_us": 7136.142222222222
00:25:17.488 }
00:25:17.488 ],
00:25:17.488 "core_count": 1
00:25:17.488 }
00:25:17.488 11:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:17.488 11:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:17.488 11:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:17.488 11:43:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:17.488 | .driver_specific
00:25:17.488 | .nvme_error
00:25:17.488 | .status_code
00:25:17.488 | .command_transient_transport_error'
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 ))
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3033334
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3033334 ']'
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3033334
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033334
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033334'
00:25:17.746 killing process with pid 3033334
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3033334
00:25:17.746 Received shutdown signal, test time was about 2.000000 seconds
00:25:17.746 Latency(us) [2024-11-15T10:43:58.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:17.746 [2024-11-15T10:43:58.173Z] ===================================================================================================================
00:25:17.746 [2024-11-15T10:43:58.173Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:17.746 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3033334
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3031951
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3031951 ']'
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3031951
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031951
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031951'
00:25:18.004 killing process with pid 3031951
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3031951
00:25:18.004 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3031951
00:25:18.264
00:25:18.264 real 0m15.614s
00:25:18.264 user 0m31.464s
00:25:18.264 sys 0m4.237s
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:18.264 ************************************
00:25:18.264 END TEST nvmf_digest_error
00:25:18.264 ************************************
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:18.264 rmmod nvme_tcp
00:25:18.264 rmmod nvme_fabrics
00:25:18.264 rmmod nvme_keyring
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3031951 ']'
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3031951
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3031951 ']'
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3031951
00:25:18.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3031951) - No such process
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3031951 is not found'
00:25:18.264 Process with pid 3031951 is not found
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:18.264 11:43:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:20.798
00:25:20.798 real 0m36.119s
00:25:20.798 user 1m4.510s
00:25:20.798 sys 0m10.149s
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:25:20.798 ************************************
00:25:20.798 END TEST nvmf_digest
00:25:20.798 ************************************
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:20.798 ************************************
00:25:20.798 START TEST nvmf_bdevperf
00:25:20.798 ************************************
00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:20.798 * Looking for test storage... 00:25:20.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.798 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.799 --rc genhtml_branch_coverage=1 00:25:20.799 --rc genhtml_function_coverage=1 00:25:20.799 --rc genhtml_legend=1 00:25:20.799 --rc geninfo_all_blocks=1 00:25:20.799 --rc geninfo_unexecuted_blocks=1 00:25:20.799 00:25:20.799 ' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.799 --rc genhtml_branch_coverage=1 00:25:20.799 --rc genhtml_function_coverage=1 00:25:20.799 --rc genhtml_legend=1 00:25:20.799 --rc geninfo_all_blocks=1 00:25:20.799 --rc geninfo_unexecuted_blocks=1 00:25:20.799 00:25:20.799 ' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.799 --rc genhtml_branch_coverage=1 00:25:20.799 --rc genhtml_function_coverage=1 00:25:20.799 --rc genhtml_legend=1 00:25:20.799 --rc geninfo_all_blocks=1 00:25:20.799 --rc geninfo_unexecuted_blocks=1 00:25:20.799 00:25:20.799 ' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.799 --rc genhtml_branch_coverage=1 00:25:20.799 --rc genhtml_function_coverage=1 00:25:20.799 --rc genhtml_legend=1 00:25:20.799 --rc geninfo_all_blocks=1 00:25:20.799 --rc geninfo_unexecuted_blocks=1 00:25:20.799 00:25:20.799 ' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.799 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.800 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.800 11:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.700 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:22.701 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:22.701 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
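[annotation] The trace above is gather_supported_nvmf_pci_devs building the allow-list of Intel E810/x722 and Mellanox device IDs and then resolving each matching PCI function to its kernel net device through sysfs, which is where the "Found net devices under 0000:09:00.x: cvl_0_x" lines below come from. A minimal stand-alone sketch of that sysfs lookup, under the assumption that reading operstate is an acceptable stand-in for the script's own up/up check (variable names here are illustrative, not the script's):

  # sketch: map one PCI function to its net device name(s), as nvmf/common.sh does above
  pci=0000:09:00.0                                   # assumption: one of the E810 ports found above
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev entries the driver exposes
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names (e.g. cvl_0_0)
  for dev in "${pci_net_devs[@]}"; do
    state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
    echo "Found net device under $pci: $dev (operstate: $state)"
  done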
00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:22.701 Found net devices under 0000:09:00.0: cvl_0_0 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:22.701 Found net devices under 0000:09:00.1: cvl_0_1 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:22.701 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:22.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:25:22.960 00:25:22.960 --- 10.0.0.2 ping statistics --- 00:25:22.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.960 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:25:22.960 00:25:22.960 --- 10.0.0.1 ping statistics --- 00:25:22.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.960 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3035809 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3035809 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3035809 ']' 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:22.960 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.960 [2024-11-15 11:44:03.234676] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:25:22.960 [2024-11-15 11:44:03.234751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.960 [2024-11-15 11:44:03.305105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:22.960 [2024-11-15 11:44:03.363079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.960 [2024-11-15 11:44:03.363131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.960 [2024-11-15 11:44:03.363159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.960 [2024-11-15 11:44:03.363170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.960 [2024-11-15 11:44:03.363179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.960 [2024-11-15 11:44:03.365010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.960 [2024-11-15 11:44:03.365064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.960 [2024-11-15 11:44:03.365067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.218 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.218 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:23.218 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 [2024-11-15 11:44:03.498607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 Malloc0 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
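[annotation] nvmf_tcp_init, traced a few entries back, splits the two E810 ports across a network namespace so target and initiator talk over a real link: cvl_0_0 is moved into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then started inside the namespace. Condensed into a stand-alone sketch; interface names, addresses and flags are copied from the run above, the iptables comment tag is dropped and the nvmf_tgt path is written relative to an SPDK build tree rather than the full workspace path:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back into the root namespace
  ping -c 1 10.0.0.2                                             # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &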
00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.219 [2024-11-15 11:44:03.554329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:23.219 { 00:25:23.219 "params": { 00:25:23.219 "name": "Nvme$subsystem", 00:25:23.219 "trtype": "$TEST_TRANSPORT", 00:25:23.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.219 "adrfam": "ipv4", 00:25:23.219 "trsvcid": "$NVMF_PORT", 00:25:23.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.219 "hdgst": ${hdgst:-false}, 00:25:23.219 "ddgst": ${ddgst:-false} 00:25:23.219 }, 00:25:23.219 "method": "bdev_nvme_attach_controller" 00:25:23.219 } 00:25:23.219 EOF 00:25:23.219 )") 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:23.219 11:44:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:23.219 "params": { 00:25:23.219 "name": "Nvme1", 00:25:23.219 "trtype": "tcp", 00:25:23.219 "traddr": "10.0.0.2", 00:25:23.219 "adrfam": "ipv4", 00:25:23.219 "trsvcid": "4420", 00:25:23.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.219 "hdgst": false, 00:25:23.219 "ddgst": false 00:25:23.219 }, 00:25:23.219 "method": "bdev_nvme_attach_controller" 00:25:23.219 }' 00:25:23.219 [2024-11-15 11:44:03.601775] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:25:23.219 [2024-11-15 11:44:03.601848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035837 ] 00:25:23.477 [2024-11-15 11:44:03.670447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.477 [2024-11-15 11:44:03.731442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.735 Running I/O for 1 seconds... 00:25:24.668 8473.00 IOPS, 33.10 MiB/s 00:25:24.668 Latency(us) 00:25:24.668 [2024-11-15T10:44:05.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.668 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:24.668 Verification LBA range: start 0x0 length 0x4000 00:25:24.668 Nvme1n1 : 1.02 8505.89 33.23 0.00 0.00 14985.55 3155.44 14854.83 00:25:24.668 [2024-11-15T10:44:05.095Z] =================================================================================================================== 00:25:24.668 [2024-11-15T10:44:05.095Z] Total : 8505.89 33.23 0.00 0.00 14985.55 3155.44 14854.83 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3036096 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:24.926 { 00:25:24.926 "params": { 00:25:24.926 "name": "Nvme$subsystem", 00:25:24.926 "trtype": "$TEST_TRANSPORT", 00:25:24.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:24.926 "adrfam": "ipv4", 00:25:24.926 "trsvcid": "$NVMF_PORT", 00:25:24.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:24.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:24.926 "hdgst": ${hdgst:-false}, 00:25:24.926 "ddgst": ${ddgst:-false} 00:25:24.926 }, 00:25:24.926 "method": "bdev_nvme_attach_controller" 00:25:24.926 } 00:25:24.926 EOF 00:25:24.926 )") 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:24.926 11:44:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:24.926 "params": { 00:25:24.926 "name": "Nvme1", 00:25:24.926 "trtype": "tcp", 00:25:24.926 "traddr": "10.0.0.2", 00:25:24.926 "adrfam": "ipv4", 00:25:24.926 "trsvcid": "4420", 00:25:24.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:24.926 "hdgst": false, 00:25:24.926 "ddgst": false 00:25:24.926 }, 00:25:24.926 "method": "bdev_nvme_attach_controller" 00:25:24.926 }' 00:25:24.926 [2024-11-15 11:44:05.255735] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:24.926 [2024-11-15 11:44:05.255810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036096 ] 00:25:24.926 [2024-11-15 11:44:05.325006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.184 [2024-11-15 11:44:05.392541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.184 Running I/O for 15 seconds... 00:25:27.490 8390.00 IOPS, 32.77 MiB/s [2024-11-15T10:44:08.486Z] 8381.00 IOPS, 32.74 MiB/s [2024-11-15T10:44:08.486Z] 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3035809 00:25:28.059 11:44:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:28.059 [2024-11-15 11:44:08.222546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 
11:44:08.222821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.222975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.222991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.059 [2024-11-15 11:44:08.223657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.059 [2024-11-15 11:44:08.223686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.060 [2024-11-15 11:44:08.223982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.223996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 
11:44:08.224126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:74 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.060 [2024-11-15 11:44:08.224843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.060 [2024-11-15 11:44:08.224857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.224869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.224882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.224895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.224908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.224920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.224933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.224946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.224959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.224971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.224987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44928 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:28.061 [2024-11-15 11:44:08.225299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.061 [2024-11-15 11:44:08.225344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225622] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225901] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.061 [2024-11-15 11:44:08.225954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.061 [2024-11-15 11:44:08.225968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.225981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.225995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.062 [2024-11-15 11:44:08.226502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 
[2024-11-15 11:44:08.226516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b2bb0 is same with the state(6) to be set 00:25:28.062 [2024-11-15 11:44:08.226533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:28.062 [2024-11-15 11:44:08.226545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:28.062 [2024-11-15 11:44:08.226562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45024 len:8 PRP1 0x0 PRP2 0x0 00:25:28.062 [2024-11-15 11:44:08.226576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.062 [2024-11-15 11:44:08.226765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.062 [2024-11-15 11:44:08.226792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.062 [2024-11-15 11:44:08.226835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.062 [2024-11-15 11:44:08.226876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.062 [2024-11-15 11:44:08.226889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.062 [2024-11-15 11:44:08.230202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.062 [2024-11-15 11:44:08.230236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.062 [2024-11-15 11:44:08.230910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.062 [2024-11-15 11:44:08.230946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.062 [2024-11-15 11:44:08.230963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.062 [2024-11-15 11:44:08.231199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.062 [2024-11-15 11:44:08.231435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.062 [2024-11-15 11:44:08.231457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.062 
[2024-11-15 11:44:08.231473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.062 [2024-11-15 11:44:08.231488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.062 [2024-11-15 11:44:08.243804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.062 [2024-11-15 11:44:08.244226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.062 [2024-11-15 11:44:08.244276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.062 [2024-11-15 11:44:08.244293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.062 [2024-11-15 11:44:08.244536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.062 [2024-11-15 11:44:08.244779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.062 [2024-11-15 11:44:08.244798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.062 [2024-11-15 11:44:08.244811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.062 [2024-11-15 11:44:08.244823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.062 [2024-11-15 11:44:08.256993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.062 [2024-11-15 11:44:08.257400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.062 [2024-11-15 11:44:08.257430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.062 [2024-11-15 11:44:08.257447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.062 [2024-11-15 11:44:08.257676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.062 [2024-11-15 11:44:08.257885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.062 [2024-11-15 11:44:08.257904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.062 [2024-11-15 11:44:08.257918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.062 [2024-11-15 11:44:08.257929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
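Note on the pattern above: the long dump earlier in this excerpt is every queued READ/WRITE on I/O qpair 1 being completed with the generic NVMe status "Command Aborted due to SQ Deletion" (the "(00/08)" pairs are status code type / status code) once the qpair is torn down, and the block just above is the cycle that then repeats for the rest of the excerpt: bdev_nvme disconnects the controller, the TCP connect to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED on Linux, meaning the address is reachable but nothing is accepting on the NVMe/TCP port), controller re-initialization fails, and the reset is retried. Below is a minimal standalone sketch of that errno using plain POSIX sockets; it is not SPDK code and only assumes the same address and port from the log with no active listener.

/*
 * Minimal sketch, not SPDK code: demonstrates the errno 111 (ECONNREFUSED)
 * that posix_sock_create() reports above. Assumes 10.0.0.2:4420 (the
 * address/port from the log) is reachable but has no active listener;
 * an unreachable host would instead time out or report a different errno.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this is typically errno 111,
         * "Connection refused", matching "connect() failed, errno = 111". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connect() succeeded\n");
    }

    close(fd);
    return 0;
}

Built with a plain cc invocation, this prints connect() failed, errno = 111 (Connection refused) when the port is closed, which is the same failure posix_sock_create() keeps reporting in each retry of the reset loop that follows.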
00:25:28.062 [2024-11-15 11:44:08.270174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.062 [2024-11-15 11:44:08.270612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.062 [2024-11-15 11:44:08.270640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.062 [2024-11-15 11:44:08.270656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.062 [2024-11-15 11:44:08.270893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.062 [2024-11-15 11:44:08.271103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.271122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.271135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.271146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.063 [2024-11-15 11:44:08.283467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.283854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.283897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.283919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.284186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.284428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.284450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.284464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.284476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.063 [2024-11-15 11:44:08.296636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.297000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.297030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.297047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.297287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.297497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.297517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.297530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.297542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.063 [2024-11-15 11:44:08.309892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.310259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.310308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.310326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.310594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.310803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.310823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.310835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.310847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.063 [2024-11-15 11:44:08.323044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.323443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.323472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.323488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.323731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.323930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.323949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.323963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.323975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.063 [2024-11-15 11:44:08.336100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.336531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.336557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.336572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.336821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.337014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.337033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.337045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.337056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.063 [2024-11-15 11:44:08.349245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.349617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.349646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.349678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.349898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.350107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.350126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.350139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.350150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.063 [2024-11-15 11:44:08.362396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.362850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.362892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.362909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.363147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.363387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.363409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.363427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.363439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.063 [2024-11-15 11:44:08.375476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.375821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.063 [2024-11-15 11:44:08.375849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.063 [2024-11-15 11:44:08.375865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.063 [2024-11-15 11:44:08.376089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.063 [2024-11-15 11:44:08.376298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.063 [2024-11-15 11:44:08.376344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.063 [2024-11-15 11:44:08.376358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.063 [2024-11-15 11:44:08.376370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.063 [2024-11-15 11:44:08.388652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.063 [2024-11-15 11:44:08.389019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.389063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.389079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.389337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.389540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.389560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.389573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.389586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.064 [2024-11-15 11:44:08.401844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.064 [2024-11-15 11:44:08.402266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.402314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.402333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.402573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.402819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.402839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.402851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.402863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.064 [2024-11-15 11:44:08.415073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.064 [2024-11-15 11:44:08.415463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.415492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.415509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.415747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.415976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.415996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.416009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.416020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.064 [2024-11-15 11:44:08.428268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.064 [2024-11-15 11:44:08.428617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.428645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.428661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.428883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.429092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.429112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.429124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.429135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.064 [2024-11-15 11:44:08.441505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.064 [2024-11-15 11:44:08.441890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.441932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.441948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.442195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.442455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.442477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.442492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.442505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.064 [2024-11-15 11:44:08.454559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.064 [2024-11-15 11:44:08.454994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.455036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.455060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.455299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.455541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.455561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.455574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.455586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.064 [2024-11-15 11:44:08.467747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.064 [2024-11-15 11:44:08.468067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.064 [2024-11-15 11:44:08.468094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.064 [2024-11-15 11:44:08.468110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.064 [2024-11-15 11:44:08.468333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.064 [2024-11-15 11:44:08.468550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.064 [2024-11-15 11:44:08.468571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.064 [2024-11-15 11:44:08.468584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.064 [2024-11-15 11:44:08.468596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.323 [2024-11-15 11:44:08.481627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.323 [2024-11-15 11:44:08.482063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.323 [2024-11-15 11:44:08.482093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.323 [2024-11-15 11:44:08.482110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.323 [2024-11-15 11:44:08.482359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.323 [2024-11-15 11:44:08.482574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.323 [2024-11-15 11:44:08.482595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.323 [2024-11-15 11:44:08.482625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.323 [2024-11-15 11:44:08.482638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.323 [2024-11-15 11:44:08.495557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.323 [2024-11-15 11:44:08.495943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.323 [2024-11-15 11:44:08.495974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.323 [2024-11-15 11:44:08.495992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.323 [2024-11-15 11:44:08.496221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.323 [2024-11-15 11:44:08.496508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.323 [2024-11-15 11:44:08.496532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.323 [2024-11-15 11:44:08.496547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.323 [2024-11-15 11:44:08.496560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.323 [2024-11-15 11:44:08.508843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.323 [2024-11-15 11:44:08.509190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.323 [2024-11-15 11:44:08.509219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.323 [2024-11-15 11:44:08.509236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.323 [2024-11-15 11:44:08.509490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.323 [2024-11-15 11:44:08.509690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.323 [2024-11-15 11:44:08.509710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.323 [2024-11-15 11:44:08.509723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.323 [2024-11-15 11:44:08.509735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.323 [2024-11-15 11:44:08.521972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.323 [2024-11-15 11:44:08.522378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.323 [2024-11-15 11:44:08.522406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.323 [2024-11-15 11:44:08.522436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.323 [2024-11-15 11:44:08.522672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.323 [2024-11-15 11:44:08.522882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.323 [2024-11-15 11:44:08.522902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.323 [2024-11-15 11:44:08.522914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.323 [2024-11-15 11:44:08.522926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.323 [2024-11-15 11:44:08.535195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.323 [2024-11-15 11:44:08.535601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.323 [2024-11-15 11:44:08.535645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.535662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.535914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.536121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.536141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.536158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.536171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.324 [2024-11-15 11:44:08.548342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.548787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.548830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.548846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.549085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.549292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.549336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.549350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.549362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.324 [2024-11-15 11:44:08.561394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.561756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.561783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.561799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.562021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.562231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.562251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.562263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.562274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.324 [2024-11-15 11:44:08.574354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.574719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.574763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.574779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.575031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.575239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.575258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.575270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.575297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.324 [2024-11-15 11:44:08.587445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.587948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.587991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.588009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.588260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.588501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.588522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.588535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.588547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.324 7321.67 IOPS, 28.60 MiB/s [2024-11-15T10:44:08.751Z] [2024-11-15 11:44:08.602186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.602645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.602688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.602704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.602970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.603184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.603204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.603217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.603228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.324 [2024-11-15 11:44:08.615297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.615670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.615713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.615728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.615975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.616184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.616203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.616215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.616226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.324 [2024-11-15 11:44:08.628373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.628736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.628764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.628786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.629027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.629234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.629253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.629266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.629277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.324 [2024-11-15 11:44:08.641363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.641726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.641754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.641769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.642003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.642212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.642232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.642244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.642256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.324 [2024-11-15 11:44:08.654601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.324 [2024-11-15 11:44:08.655089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.324 [2024-11-15 11:44:08.655116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.324 [2024-11-15 11:44:08.655147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.324 [2024-11-15 11:44:08.655420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.324 [2024-11-15 11:44:08.655626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.324 [2024-11-15 11:44:08.655646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.324 [2024-11-15 11:44:08.655659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.324 [2024-11-15 11:44:08.655671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.325 [2024-11-15 11:44:08.667697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.325 [2024-11-15 11:44:08.668189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-11-15 11:44:08.668232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-11-15 11:44:08.668248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.325 [2024-11-15 11:44:08.668498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.325 [2024-11-15 11:44:08.668736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.325 [2024-11-15 11:44:08.668756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.325 [2024-11-15 11:44:08.668768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.325 [2024-11-15 11:44:08.668780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.325 [2024-11-15 11:44:08.680719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.325 [2024-11-15 11:44:08.681064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-11-15 11:44:08.681126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-11-15 11:44:08.681142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.325 [2024-11-15 11:44:08.681389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.325 [2024-11-15 11:44:08.681623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.325 [2024-11-15 11:44:08.681643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.325 [2024-11-15 11:44:08.681656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.325 [2024-11-15 11:44:08.681668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.325 [2024-11-15 11:44:08.693726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.325 [2024-11-15 11:44:08.694100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-11-15 11:44:08.694143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-11-15 11:44:08.694158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.325 [2024-11-15 11:44:08.694439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.325 [2024-11-15 11:44:08.694664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.325 [2024-11-15 11:44:08.694684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.325 [2024-11-15 11:44:08.694698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.325 [2024-11-15 11:44:08.694710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.325 [2024-11-15 11:44:08.706969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.325 [2024-11-15 11:44:08.707332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-11-15 11:44:08.707375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-11-15 11:44:08.707391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.325 [2024-11-15 11:44:08.707644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.325 [2024-11-15 11:44:08.707851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.325 [2024-11-15 11:44:08.707870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.325 [2024-11-15 11:44:08.707887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.325 [2024-11-15 11:44:08.707899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.325 [2024-11-15 11:44:08.720026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.325 [2024-11-15 11:44:08.720362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-11-15 11:44:08.720390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-11-15 11:44:08.720406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.325 [2024-11-15 11:44:08.720628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.325 [2024-11-15 11:44:08.720837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.325 [2024-11-15 11:44:08.720856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.325 [2024-11-15 11:44:08.720868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.325 [2024-11-15 11:44:08.720879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.325 [2024-11-15 11:44:08.733152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.325 [2024-11-15 11:44:08.733583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.325 [2024-11-15 11:44:08.733629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.325 [2024-11-15 11:44:08.733645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.325 [2024-11-15 11:44:08.733915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.325 [2024-11-15 11:44:08.734120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.325 [2024-11-15 11:44:08.734141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.325 [2024-11-15 11:44:08.734153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.325 [2024-11-15 11:44:08.734165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.585 [2024-11-15 11:44:08.746861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.747199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.747229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.585 [2024-11-15 11:44:08.747245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.585 [2024-11-15 11:44:08.747500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.585 [2024-11-15 11:44:08.747757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.585 [2024-11-15 11:44:08.747792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.585 [2024-11-15 11:44:08.747806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.585 [2024-11-15 11:44:08.747818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.585 [2024-11-15 11:44:08.760205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.760605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.760641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.585 [2024-11-15 11:44:08.760672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.585 [2024-11-15 11:44:08.760905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.585 [2024-11-15 11:44:08.761098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.585 [2024-11-15 11:44:08.761117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.585 [2024-11-15 11:44:08.761129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.585 [2024-11-15 11:44:08.761141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.585 [2024-11-15 11:44:08.773606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.773981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.774031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.585 [2024-11-15 11:44:08.774046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.585 [2024-11-15 11:44:08.774279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.585 [2024-11-15 11:44:08.774518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.585 [2024-11-15 11:44:08.774540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.585 [2024-11-15 11:44:08.774555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.585 [2024-11-15 11:44:08.774568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.585 [2024-11-15 11:44:08.786980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.787325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.787355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.585 [2024-11-15 11:44:08.787372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.585 [2024-11-15 11:44:08.787602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.585 [2024-11-15 11:44:08.787819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.585 [2024-11-15 11:44:08.787839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.585 [2024-11-15 11:44:08.787852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.585 [2024-11-15 11:44:08.787864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.585 [2024-11-15 11:44:08.800324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.800774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.800819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.585 [2024-11-15 11:44:08.800841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.585 [2024-11-15 11:44:08.801083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.585 [2024-11-15 11:44:08.801298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.585 [2024-11-15 11:44:08.801328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.585 [2024-11-15 11:44:08.801341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.585 [2024-11-15 11:44:08.801370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.585 [2024-11-15 11:44:08.813646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.814020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.814063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.585 [2024-11-15 11:44:08.814078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.585 [2024-11-15 11:44:08.814343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.585 [2024-11-15 11:44:08.814570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.585 [2024-11-15 11:44:08.814605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.585 [2024-11-15 11:44:08.814619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.585 [2024-11-15 11:44:08.814631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.585 [2024-11-15 11:44:08.826816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.585 [2024-11-15 11:44:08.827191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.585 [2024-11-15 11:44:08.827234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.827250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.827502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.827737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.827757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.827770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.827782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.586 [2024-11-15 11:44:08.840094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.840491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.840535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.840551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.840803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.841006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.841026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.841039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.841050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.586 [2024-11-15 11:44:08.853312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.853723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.853751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.853767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.853988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.854203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.854223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.854236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.854247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.586 [2024-11-15 11:44:08.866633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.866973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.867001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.867017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.867257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.867526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.867548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.867562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.867574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.586 [2024-11-15 11:44:08.879790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.880100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.880127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.880142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.880388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.880630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.880651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.880669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.880682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.586 [2024-11-15 11:44:08.893021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.893461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.893491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.893507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.893749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.893948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.893967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.893980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.893992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.586 [2024-11-15 11:44:08.906254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.906623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.906666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.906682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.906931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.907129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.907149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.907161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.907173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.586 [2024-11-15 11:44:08.919545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.919921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.919949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.919964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.920185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.920448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.920470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.920483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.920495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.586 [2024-11-15 11:44:08.932858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.933275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.933292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.933542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.933761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.933782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.933794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.933806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.586 [2024-11-15 11:44:08.946096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.946495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.946538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.586 [2024-11-15 11:44:08.946553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.586 [2024-11-15 11:44:08.946787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.586 [2024-11-15 11:44:08.946986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.586 [2024-11-15 11:44:08.947005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.586 [2024-11-15 11:44:08.947019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.586 [2024-11-15 11:44:08.947030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.586 [2024-11-15 11:44:08.959264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.586 [2024-11-15 11:44:08.959661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.586 [2024-11-15 11:44:08.959690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.587 [2024-11-15 11:44:08.959707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.587 [2024-11-15 11:44:08.959947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.587 [2024-11-15 11:44:08.960146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.587 [2024-11-15 11:44:08.960165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.587 [2024-11-15 11:44:08.960178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.587 [2024-11-15 11:44:08.960191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.587 [2024-11-15 11:44:08.972639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.587 [2024-11-15 11:44:08.972983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.587 [2024-11-15 11:44:08.973010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.587 [2024-11-15 11:44:08.973031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.587 [2024-11-15 11:44:08.973253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.587 [2024-11-15 11:44:08.973501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.587 [2024-11-15 11:44:08.973523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.587 [2024-11-15 11:44:08.973536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.587 [2024-11-15 11:44:08.973548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.587 [2024-11-15 11:44:08.985957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.587 [2024-11-15 11:44:08.986378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.587 [2024-11-15 11:44:08.986413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.587 [2024-11-15 11:44:08.986434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.587 [2024-11-15 11:44:08.986664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.587 [2024-11-15 11:44:08.986897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.587 [2024-11-15 11:44:08.986919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.587 [2024-11-15 11:44:08.986933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.587 [2024-11-15 11:44:08.986945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.587 [2024-11-15 11:44:08.999423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.587 [2024-11-15 11:44:08.999804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.587 [2024-11-15 11:44:08.999834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.587 [2024-11-15 11:44:08.999851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.587 [2024-11-15 11:44:09.000079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.587 [2024-11-15 11:44:09.000326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.587 [2024-11-15 11:44:09.000349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.587 [2024-11-15 11:44:09.000363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.587 [2024-11-15 11:44:09.000376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.847 [2024-11-15 11:44:09.012674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.013028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.013058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.013075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.013288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.013554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.013577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.013591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.013604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.847 [2024-11-15 11:44:09.025941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.026316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.026345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.026362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.026575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.026806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.026826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.026839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.026851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.847 [2024-11-15 11:44:09.039253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.039609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.039638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.039654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.039876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.040091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.040111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.040124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.040136] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.847 [2024-11-15 11:44:09.052539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.052931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.052973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.052989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.053238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.053491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.053514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.053534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.053548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.847 [2024-11-15 11:44:09.065873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.066212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.066241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.066256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.066510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.066746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.066767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.066780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.066792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.847 [2024-11-15 11:44:09.079061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.079460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.079488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.079505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.079746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.079945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.079964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.079977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.079989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.847 [2024-11-15 11:44:09.092322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.092677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.092706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.092738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.092993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.093192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.093211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.847 [2024-11-15 11:44:09.093224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.847 [2024-11-15 11:44:09.093236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.847 [2024-11-15 11:44:09.105668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.847 [2024-11-15 11:44:09.106041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.847 [2024-11-15 11:44:09.106084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.847 [2024-11-15 11:44:09.106100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.847 [2024-11-15 11:44:09.106365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.847 [2024-11-15 11:44:09.106607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.847 [2024-11-15 11:44:09.106628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.106641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.106653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.848 [2024-11-15 11:44:09.118970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.119317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.119345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.119361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.119583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.119816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.119837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.119850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.119862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.848 [2024-11-15 11:44:09.132196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.132595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.132637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.132654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.132908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.133106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.133125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.133138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.133150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.848 [2024-11-15 11:44:09.145471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.145810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.145852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.145873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.146096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.146337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.146373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.146387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.146399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.848 [2024-11-15 11:44:09.158808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.159177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.159220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.159236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.159489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.159710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.159730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.159743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.159754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.848 [2024-11-15 11:44:09.172134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.172506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.172534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.172550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.172787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.173002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.173021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.173034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.173046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.848 [2024-11-15 11:44:09.185405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.185797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.185824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.185840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.186060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.186279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.186323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.186337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.186364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.848 [2024-11-15 11:44:09.198705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.199078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.199120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.199136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.199386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.199598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.199619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.199647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.199659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.848 [2024-11-15 11:44:09.211899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.212253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.212281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.212296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.212534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.212768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.212787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.212800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.212812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.848 [2024-11-15 11:44:09.225131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.848 [2024-11-15 11:44:09.225534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.848 [2024-11-15 11:44:09.225563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.848 [2024-11-15 11:44:09.225579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.848 [2024-11-15 11:44:09.225819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.848 [2024-11-15 11:44:09.226018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.848 [2024-11-15 11:44:09.226037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.848 [2024-11-15 11:44:09.226056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.848 [2024-11-15 11:44:09.226068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.849 [2024-11-15 11:44:09.238405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.849 [2024-11-15 11:44:09.238809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.849 [2024-11-15 11:44:09.238853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.849 [2024-11-15 11:44:09.238869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.849 [2024-11-15 11:44:09.239116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.849 [2024-11-15 11:44:09.239361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.849 [2024-11-15 11:44:09.239384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.849 [2024-11-15 11:44:09.239399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.849 [2024-11-15 11:44:09.239412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:28.849 [2024-11-15 11:44:09.251910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.849 [2024-11-15 11:44:09.252250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.849 [2024-11-15 11:44:09.252293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.849 [2024-11-15 11:44:09.252322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.849 [2024-11-15 11:44:09.252551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.849 [2024-11-15 11:44:09.252773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.849 [2024-11-15 11:44:09.252794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.849 [2024-11-15 11:44:09.252808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.849 [2024-11-15 11:44:09.252819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:28.849 [2024-11-15 11:44:09.265146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:28.849 [2024-11-15 11:44:09.265507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.849 [2024-11-15 11:44:09.265551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:28.849 [2024-11-15 11:44:09.265567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:28.849 [2024-11-15 11:44:09.265820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:28.849 [2024-11-15 11:44:09.266052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:28.849 [2024-11-15 11:44:09.266073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:28.849 [2024-11-15 11:44:09.266087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:28.849 [2024-11-15 11:44:09.266100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.108 [2024-11-15 11:44:09.278692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.108 [2024-11-15 11:44:09.279022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.108 [2024-11-15 11:44:09.279050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.108 [2024-11-15 11:44:09.279066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.108 [2024-11-15 11:44:09.279267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.108 [2024-11-15 11:44:09.279503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.279525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.279538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.279551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.109 [2024-11-15 11:44:09.291908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.292257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.292285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.292324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.292542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.292778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.292798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.292811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.292822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.109 [2024-11-15 11:44:09.305227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.305624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.305653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.305670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.305910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.306123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.306143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.306156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.306168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.109 [2024-11-15 11:44:09.318408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.318810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.318853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.318874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.319126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.319357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.319379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.319394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.319407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.109 [2024-11-15 11:44:09.331673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.332045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.332073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.332089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.332319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.332524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.332545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.332559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.332573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.109 [2024-11-15 11:44:09.344962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.345336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.345365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.345381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.345611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.345828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.345848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.345861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.345874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.109 [2024-11-15 11:44:09.358298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.358740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.358781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.358797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.359018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.359238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.359258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.359271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.359298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.109 [2024-11-15 11:44:09.371798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.372173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.372217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.372233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.372486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.372719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.372740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.372753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.372764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.109 [2024-11-15 11:44:09.385157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.385561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.385590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.385606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.385846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.386049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.386069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.386081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.109 [2024-11-15 11:44:09.386093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.109 [2024-11-15 11:44:09.398504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.109 [2024-11-15 11:44:09.398932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.109 [2024-11-15 11:44:09.398959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.109 [2024-11-15 11:44:09.398975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.109 [2024-11-15 11:44:09.399181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.109 [2024-11-15 11:44:09.399440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.109 [2024-11-15 11:44:09.399462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.109 [2024-11-15 11:44:09.399484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.399497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.110 [2024-11-15 11:44:09.411705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.412076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.412119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.412135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.412402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.412624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.412646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.412674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.412687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.110 [2024-11-15 11:44:09.424962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.425338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.425377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.425393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.425621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.425821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.425841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.425853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.425865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.110 [2024-11-15 11:44:09.438405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.438766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.438795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.438811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.439054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.439299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.439330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.439344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.439371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.110 [2024-11-15 11:44:09.451827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.452208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.452252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.452268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.452518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.452751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.452771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.452784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.452798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.110 [2024-11-15 11:44:09.465055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.465545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.465589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.465607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.465845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.466060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.466080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.466093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.466105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.110 [2024-11-15 11:44:09.478392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.478743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.478771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.478787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.478988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.479202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.479222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.479236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.479248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.110 [2024-11-15 11:44:09.491876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.492299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.492339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.492363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.492595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.492824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.492846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.492860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.492873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.110 [2024-11-15 11:44:09.505262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.505628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.505657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.505673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.505880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.506112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.506132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.506145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.506157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.110 [2024-11-15 11:44:09.518618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.110 [2024-11-15 11:44:09.518992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.110 [2024-11-15 11:44:09.519021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.110 [2024-11-15 11:44:09.519037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.110 [2024-11-15 11:44:09.519278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.110 [2024-11-15 11:44:09.519504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.110 [2024-11-15 11:44:09.519526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.110 [2024-11-15 11:44:09.519539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.110 [2024-11-15 11:44:09.519552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.370 [2024-11-15 11:44:09.532262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.532609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.532638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.532655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.370 [2024-11-15 11:44:09.532883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.370 [2024-11-15 11:44:09.533123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.370 [2024-11-15 11:44:09.533144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.370 [2024-11-15 11:44:09.533157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.370 [2024-11-15 11:44:09.533183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.370 [2024-11-15 11:44:09.545791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.546177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.546221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.546237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.370 [2024-11-15 11:44:09.546477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.370 [2024-11-15 11:44:09.546723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.370 [2024-11-15 11:44:09.546744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.370 [2024-11-15 11:44:09.546757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.370 [2024-11-15 11:44:09.546770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.370 [2024-11-15 11:44:09.559152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.559511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.559541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.559557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.370 [2024-11-15 11:44:09.559787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.370 [2024-11-15 11:44:09.560003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.370 [2024-11-15 11:44:09.560023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.370 [2024-11-15 11:44:09.560036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.370 [2024-11-15 11:44:09.560048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.370 [2024-11-15 11:44:09.572689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.573065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.573108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.573124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.370 [2024-11-15 11:44:09.573384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.370 [2024-11-15 11:44:09.573596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.370 [2024-11-15 11:44:09.573618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.370 [2024-11-15 11:44:09.573652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.370 [2024-11-15 11:44:09.573665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.370 [2024-11-15 11:44:09.585930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.586322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.586358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.586375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.370 [2024-11-15 11:44:09.586603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.370 [2024-11-15 11:44:09.586819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.370 [2024-11-15 11:44:09.586839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.370 [2024-11-15 11:44:09.586851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.370 [2024-11-15 11:44:09.586863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.370 [2024-11-15 11:44:09.599300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.599843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.599887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.599902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.370 [2024-11-15 11:44:09.600150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.370 [2024-11-15 11:44:09.600396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.370 [2024-11-15 11:44:09.600419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.370 [2024-11-15 11:44:09.600433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.370 [2024-11-15 11:44:09.600446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.370 5491.25 IOPS, 21.45 MiB/s [2024-11-15T10:44:09.797Z] [2024-11-15 11:44:09.612502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.370 [2024-11-15 11:44:09.612905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.370 [2024-11-15 11:44:09.612955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.370 [2024-11-15 11:44:09.612972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.613243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.613468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.613489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.613502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.613514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.371 [2024-11-15 11:44:09.625773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.626271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.626332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.626348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.626597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.626806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.626826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.626838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.626850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.371 [2024-11-15 11:44:09.638872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.639283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.639344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.639360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.639622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.639815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.639834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.639847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.639858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.371 [2024-11-15 11:44:09.652113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.652439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.652506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.652543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.652766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.652959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.652978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.652990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.653002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.371 [2024-11-15 11:44:09.665216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.665652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.665708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.665729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.665992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.666185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.666204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.666216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.666228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.371 [2024-11-15 11:44:09.678180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.678609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.678664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.678679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.678906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.679099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.679118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.679130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.679141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.371 [2024-11-15 11:44:09.691214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.691654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.691710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.691726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.691987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.692180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.692199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.692211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.692223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.371 [2024-11-15 11:44:09.704382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.704784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.704850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.704865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.705115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.705354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.705375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.705388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.705400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.371 [2024-11-15 11:44:09.717480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.717858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.717899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.717916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.718136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.718371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.718407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.718420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.371 [2024-11-15 11:44:09.718433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.371 [2024-11-15 11:44:09.730515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.371 [2024-11-15 11:44:09.730844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.371 [2024-11-15 11:44:09.730869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.371 [2024-11-15 11:44:09.730884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.371 [2024-11-15 11:44:09.731078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.371 [2024-11-15 11:44:09.731310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.371 [2024-11-15 11:44:09.731331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.371 [2024-11-15 11:44:09.731343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.372 [2024-11-15 11:44:09.731370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.372 [2024-11-15 11:44:09.743596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.372 [2024-11-15 11:44:09.744002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.372 [2024-11-15 11:44:09.744032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.372 [2024-11-15 11:44:09.744049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.372 [2024-11-15 11:44:09.744289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.372 [2024-11-15 11:44:09.744557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.372 [2024-11-15 11:44:09.744580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.372 [2024-11-15 11:44:09.744601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.372 [2024-11-15 11:44:09.744615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.372 [2024-11-15 11:44:09.756987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.372 [2024-11-15 11:44:09.757366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.372 [2024-11-15 11:44:09.757395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.372 [2024-11-15 11:44:09.757412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.372 [2024-11-15 11:44:09.757654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.372 [2024-11-15 11:44:09.757852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.372 [2024-11-15 11:44:09.757873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.372 [2024-11-15 11:44:09.757886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.372 [2024-11-15 11:44:09.757897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.372 [2024-11-15 11:44:09.770015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.372 [2024-11-15 11:44:09.770513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.372 [2024-11-15 11:44:09.770555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.372 [2024-11-15 11:44:09.770573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.372 [2024-11-15 11:44:09.770838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.372 [2024-11-15 11:44:09.771051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.372 [2024-11-15 11:44:09.771071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.372 [2024-11-15 11:44:09.771085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.372 [2024-11-15 11:44:09.771096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.372 [2024-11-15 11:44:09.783048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.372 [2024-11-15 11:44:09.783478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.372 [2024-11-15 11:44:09.783522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.372 [2024-11-15 11:44:09.783539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.372 [2024-11-15 11:44:09.783780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.372 [2024-11-15 11:44:09.783988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.372 [2024-11-15 11:44:09.784007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.372 [2024-11-15 11:44:09.784019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.372 [2024-11-15 11:44:09.784031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.631 [2024-11-15 11:44:09.796480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.631 [2024-11-15 11:44:09.796985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.631 [2024-11-15 11:44:09.797038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.631 [2024-11-15 11:44:09.797054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.631 [2024-11-15 11:44:09.797326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.631 [2024-11-15 11:44:09.797546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.631 [2024-11-15 11:44:09.797567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.631 [2024-11-15 11:44:09.797581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.631 [2024-11-15 11:44:09.797593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.631 [2024-11-15 11:44:09.809569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.631 [2024-11-15 11:44:09.809980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.631 [2024-11-15 11:44:09.810007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.631 [2024-11-15 11:44:09.810037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.631 [2024-11-15 11:44:09.810270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.631 [2024-11-15 11:44:09.810511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.631 [2024-11-15 11:44:09.810533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.631 [2024-11-15 11:44:09.810547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.810559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.632 [2024-11-15 11:44:09.822680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.823043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.823084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.823100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.823360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.823581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.823616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.823630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.823642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.632 [2024-11-15 11:44:09.835671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.836170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.836212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.836235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.836520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.836751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.836771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.836783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.836795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.632 [2024-11-15 11:44:09.848647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.849023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.849075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.849109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.849354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.849553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.849573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.849585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.849597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.632 [2024-11-15 11:44:09.861755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.862148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.862176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.862191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.862423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.862644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.862678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.862690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.862701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.632 [2024-11-15 11:44:09.874794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.875156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.875183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.875198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.875443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.875669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.875704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.875717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.875729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.632 [2024-11-15 11:44:09.887897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.888260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.888310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.888329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.888581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.888791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.888810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.888823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.888834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.632 [2024-11-15 11:44:09.900907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.901240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.901268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.901284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.901550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.901763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.901783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.901795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.901807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.632 [2024-11-15 11:44:09.913932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.914360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.914388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.632 [2024-11-15 11:44:09.914403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.632 [2024-11-15 11:44:09.914666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.632 [2024-11-15 11:44:09.914859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.632 [2024-11-15 11:44:09.914878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.632 [2024-11-15 11:44:09.914895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.632 [2024-11-15 11:44:09.914908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.632 [2024-11-15 11:44:09.927006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.632 [2024-11-15 11:44:09.927373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.632 [2024-11-15 11:44:09.927417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:09.927433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:09.927684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:09.927892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:09.927911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:09.927923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:09.927934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.633 [2024-11-15 11:44:09.940035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:09.940464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:09.940493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:09.940509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:09.940749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:09.940957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:09.940977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:09.940989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:09.941001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.633 [2024-11-15 11:44:09.953149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:09.953601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:09.953644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:09.953660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:09.953912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:09.954120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:09.954139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:09.954152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:09.954163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.633 [2024-11-15 11:44:09.966204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:09.966594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:09.966639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:09.966655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:09.966888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:09.967096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:09.967115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:09.967128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:09.967139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.633 [2024-11-15 11:44:09.979191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:09.979520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:09.979563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:09.979579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:09.979801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:09.980010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:09.980029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:09.980041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:09.980053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.633 [2024-11-15 11:44:09.992278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:09.992662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:09.992690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:09.992720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:09.992941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:09.993150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:09.993169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:09.993182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:09.993193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.633 [2024-11-15 11:44:10.006487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:10.006854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:10.006885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:10.006909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:10.007132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:10.007382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:10.007406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:10.007421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:10.007435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.633 [2024-11-15 11:44:10.019781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:10.020114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:10.020159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:10.020175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:10.020415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:10.020636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:10.020657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:10.020671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:10.020684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.633 [2024-11-15 11:44:10.032855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:10.033265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:10.033292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:10.033335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:10.033566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.633 [2024-11-15 11:44:10.033798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.633 [2024-11-15 11:44:10.033818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.633 [2024-11-15 11:44:10.033831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.633 [2024-11-15 11:44:10.033843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.633 [2024-11-15 11:44:10.046320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.633 [2024-11-15 11:44:10.046816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.633 [2024-11-15 11:44:10.046860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.633 [2024-11-15 11:44:10.046877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.633 [2024-11-15 11:44:10.047126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.634 [2024-11-15 11:44:10.047409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.634 [2024-11-15 11:44:10.047431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.634 [2024-11-15 11:44:10.047445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.634 [2024-11-15 11:44:10.047457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.893 [2024-11-15 11:44:10.059604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.059940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.059970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.059987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.060201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.060430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.060453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.060467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.893 [2024-11-15 11:44:10.060480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.893 [2024-11-15 11:44:10.073049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.073452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.073483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.073499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.073741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.073941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.073961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.073974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.893 [2024-11-15 11:44:10.073986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.893 [2024-11-15 11:44:10.086476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.086928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.086955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.086985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.087207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.087458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.087480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.087500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.893 [2024-11-15 11:44:10.087514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.893 [2024-11-15 11:44:10.099896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.100300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.100375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.100391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.100619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.100862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.100881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.100894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.893 [2024-11-15 11:44:10.100905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.893 [2024-11-15 11:44:10.113178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.113577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.113606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.113623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.113867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.114076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.114096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.114109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.893 [2024-11-15 11:44:10.114120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.893 [2024-11-15 11:44:10.126528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.126931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.126959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.126974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.127195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.127432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.127453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.127466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.893 [2024-11-15 11:44:10.127478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.893 [2024-11-15 11:44:10.139868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.893 [2024-11-15 11:44:10.140361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.893 [2024-11-15 11:44:10.140416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.893 [2024-11-15 11:44:10.140434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.893 [2024-11-15 11:44:10.140684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.893 [2024-11-15 11:44:10.140911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.893 [2024-11-15 11:44:10.140931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.893 [2024-11-15 11:44:10.140945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.140957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.894 [2024-11-15 11:44:10.153267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.153734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.153777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.153793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.154047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.154258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.154277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.154314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.154330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.894 [2024-11-15 11:44:10.166693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.167076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.167104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.167120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.167374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.167579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.167613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.167626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.167638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.894 [2024-11-15 11:44:10.179943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.180279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.180315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.180354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.180595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.180819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.180839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.180852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.180863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.894 [2024-11-15 11:44:10.193405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.193859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.193887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.193903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.194132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.194387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.194409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.194423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.194434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.894 [2024-11-15 11:44:10.206782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.207231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.207287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.207311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.207561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.207770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.207790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.207802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.207813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.894 [2024-11-15 11:44:10.220083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.220507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.220536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.220552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.220781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.221034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.221055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.221068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.221080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.894 [2024-11-15 11:44:10.233471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.233906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.233957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.233974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.234212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.234449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.234469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.234482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.234494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.894 [2024-11-15 11:44:10.246726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.247241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.247296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.247321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.247565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.247792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.247811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.247823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.247835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.894 [2024-11-15 11:44:10.259860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.260209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.260252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.260268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.260524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.260783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.260809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.260829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.894 [2024-11-15 11:44:10.260843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.894 [2024-11-15 11:44:10.273477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.894 [2024-11-15 11:44:10.273982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.894 [2024-11-15 11:44:10.274040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.894 [2024-11-15 11:44:10.274057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.894 [2024-11-15 11:44:10.274285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.894 [2024-11-15 11:44:10.274519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.894 [2024-11-15 11:44:10.274540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.894 [2024-11-15 11:44:10.274553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.895 [2024-11-15 11:44:10.274565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.895 [2024-11-15 11:44:10.286971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.895 [2024-11-15 11:44:10.287334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-11-15 11:44:10.287364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.895 [2024-11-15 11:44:10.287381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.895 [2024-11-15 11:44:10.287613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.895 [2024-11-15 11:44:10.287822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.895 [2024-11-15 11:44:10.287842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.895 [2024-11-15 11:44:10.287854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.895 [2024-11-15 11:44:10.287865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:29.895 [2024-11-15 11:44:10.300297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.895 [2024-11-15 11:44:10.300741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-11-15 11:44:10.300783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.895 [2024-11-15 11:44:10.300799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.895 [2024-11-15 11:44:10.301035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.895 [2024-11-15 11:44:10.301261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.895 [2024-11-15 11:44:10.301282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.895 [2024-11-15 11:44:10.301295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.895 [2024-11-15 11:44:10.301334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:29.895 [2024-11-15 11:44:10.313737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:29.895 [2024-11-15 11:44:10.314229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.895 [2024-11-15 11:44:10.314287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:29.895 [2024-11-15 11:44:10.314314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:29.895 [2024-11-15 11:44:10.314530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:29.895 [2024-11-15 11:44:10.314762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:29.895 [2024-11-15 11:44:10.314784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:29.895 [2024-11-15 11:44:10.314798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:29.895 [2024-11-15 11:44:10.314810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.154 [2024-11-15 11:44:10.327082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.154 [2024-11-15 11:44:10.327426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.154 [2024-11-15 11:44:10.327455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.154 [2024-11-15 11:44:10.327471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.154 [2024-11-15 11:44:10.327703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.154 [2024-11-15 11:44:10.327913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.154 [2024-11-15 11:44:10.327932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.154 [2024-11-15 11:44:10.327945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.154 [2024-11-15 11:44:10.327956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.154 [2024-11-15 11:44:10.340532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.154 [2024-11-15 11:44:10.340935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.154 [2024-11-15 11:44:10.340963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.154 [2024-11-15 11:44:10.340978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.154 [2024-11-15 11:44:10.341200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.154 [2024-11-15 11:44:10.341464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.154 [2024-11-15 11:44:10.341487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.154 [2024-11-15 11:44:10.341501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.154 [2024-11-15 11:44:10.341514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.154 [2024-11-15 11:44:10.353781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.154 [2024-11-15 11:44:10.354217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.154 [2024-11-15 11:44:10.354244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.154 [2024-11-15 11:44:10.354281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.154 [2024-11-15 11:44:10.354520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.154 [2024-11-15 11:44:10.354749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.154 [2024-11-15 11:44:10.354768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.154 [2024-11-15 11:44:10.354781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.154 [2024-11-15 11:44:10.354792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.154 [2024-11-15 11:44:10.366948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.154 [2024-11-15 11:44:10.367381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.154 [2024-11-15 11:44:10.367410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.154 [2024-11-15 11:44:10.367426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.154 [2024-11-15 11:44:10.367654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.154 [2024-11-15 11:44:10.367884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.154 [2024-11-15 11:44:10.367903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.154 [2024-11-15 11:44:10.367916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.154 [2024-11-15 11:44:10.367927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.155 [2024-11-15 11:44:10.380146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.380514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.380543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.380559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.380787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.380998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.381018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.381030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.381041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.155 [2024-11-15 11:44:10.393443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.393857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.393899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.393916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.394168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.394395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.394415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.394428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.394440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.155 [2024-11-15 11:44:10.406695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.407067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.407109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.407125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.407378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.407590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.407628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.407641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.407653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.155 [2024-11-15 11:44:10.419943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.420441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.420483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.420500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.420739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.420948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.420967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.420979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.420990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.155 [2024-11-15 11:44:10.433041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.433393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.433423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.433439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.433678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.433887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.433906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.433923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.433935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.155 [2024-11-15 11:44:10.446283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.446679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.446720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.446737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.446958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.447167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.447186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.447199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.447210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.155 [2024-11-15 11:44:10.459529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.459942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.155 [2024-11-15 11:44:10.459984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.155 [2024-11-15 11:44:10.459999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.155 [2024-11-15 11:44:10.460248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.155 [2024-11-15 11:44:10.460500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.155 [2024-11-15 11:44:10.460523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.155 [2024-11-15 11:44:10.460538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.155 [2024-11-15 11:44:10.460550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.155 [2024-11-15 11:44:10.472859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.155 [2024-11-15 11:44:10.473359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.473386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.473402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.473672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.473885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.473905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.473917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.473929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.156 [2024-11-15 11:44:10.486147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.156 [2024-11-15 11:44:10.486834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.486874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.486916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.487144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.487371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.487393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.487406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.487418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.156 [2024-11-15 11:44:10.499391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.156 [2024-11-15 11:44:10.499752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.499827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.499860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.500096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.500315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.500336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.500364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.500377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.156 [2024-11-15 11:44:10.512585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.156 [2024-11-15 11:44:10.513071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.513104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.513121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.513382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.513638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.513676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.513691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.513705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.156 [2024-11-15 11:44:10.525894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.156 [2024-11-15 11:44:10.526270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.526309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.526337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.526567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.526799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.526819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.526832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.526845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.156 [2024-11-15 11:44:10.539497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.156 [2024-11-15 11:44:10.539885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.539915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.539932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.540160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.540428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.540452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.540466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.540479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.156 [2024-11-15 11:44:10.553179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.156 [2024-11-15 11:44:10.553510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.156 [2024-11-15 11:44:10.553539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.156 [2024-11-15 11:44:10.553556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.156 [2024-11-15 11:44:10.553797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.156 [2024-11-15 11:44:10.554012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.156 [2024-11-15 11:44:10.554031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.156 [2024-11-15 11:44:10.554044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.156 [2024-11-15 11:44:10.554055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.156 [2024-11-15 11:44:10.566671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.157 [2024-11-15 11:44:10.567060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.157 [2024-11-15 11:44:10.567104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.157 [2024-11-15 11:44:10.567119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.157 [2024-11-15 11:44:10.567395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.157 [2024-11-15 11:44:10.567621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.157 [2024-11-15 11:44:10.567656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.157 [2024-11-15 11:44:10.567668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.157 [2024-11-15 11:44:10.567680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.416 [2024-11-15 11:44:10.580146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.580567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.580619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.580637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.580884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.581076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.581095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.581108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.581120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.417 [2024-11-15 11:44:10.593402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.593791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.593829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.593862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.594089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.594297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.594327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.594341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.594353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.417 4393.00 IOPS, 17.16 MiB/s [2024-11-15T10:44:10.844Z] [2024-11-15 11:44:10.608113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.608491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.608535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.608551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.608786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.608985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.609005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.609024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.609038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.417 [2024-11-15 11:44:10.621668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.622077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.622106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.622140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.622364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.622598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.622619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.622632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.622645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.417 [2024-11-15 11:44:10.634959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.635341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.635373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.635389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.635617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.635839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.635860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.635873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.635884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.417 [2024-11-15 11:44:10.648250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.648615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.648644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.648660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.648891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.649106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.649127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.649139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.649151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.417 [2024-11-15 11:44:10.661632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.662060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.662101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.662116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.662376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.662581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.662602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.662615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.662642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.417 [2024-11-15 11:44:10.674935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.417 [2024-11-15 11:44:10.675270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.417 [2024-11-15 11:44:10.675298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.417 [2024-11-15 11:44:10.675342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.417 [2024-11-15 11:44:10.675571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.417 [2024-11-15 11:44:10.675799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.417 [2024-11-15 11:44:10.675818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.417 [2024-11-15 11:44:10.675830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.417 [2024-11-15 11:44:10.675841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.418 [2024-11-15 11:44:10.688127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.688502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.688531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.688547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.688775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.689002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.689023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.689036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.689063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.418 [2024-11-15 11:44:10.701520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.701920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.701964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.701989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.702243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.702484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.702507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.702522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.702534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.418 [2024-11-15 11:44:10.714834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.715175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.715203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.715218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.715458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.715684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.715704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.715717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.715729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.418 [2024-11-15 11:44:10.728034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.728402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.728430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.728447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.728688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.728881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.728900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.728912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.728923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.418 [2024-11-15 11:44:10.741383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.741801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.741842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.741858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.742094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.742318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.742339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.742368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.742381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.418 [2024-11-15 11:44:10.754771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.755198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.755239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.755256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.755495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.755727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.755747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.755759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.755770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.418 [2024-11-15 11:44:10.767967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.768412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.768444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.768461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.768674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.768908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.768945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.768958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.768970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.418 [2024-11-15 11:44:10.781187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.781556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.781587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.781618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.781854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.782047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.782066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.782083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.782095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.418 [2024-11-15 11:44:10.794467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.418 [2024-11-15 11:44:10.794853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.418 [2024-11-15 11:44:10.794896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.418 [2024-11-15 11:44:10.794912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.418 [2024-11-15 11:44:10.795160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.418 [2024-11-15 11:44:10.795414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.418 [2024-11-15 11:44:10.795435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.418 [2024-11-15 11:44:10.795449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.418 [2024-11-15 11:44:10.795461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.419 [2024-11-15 11:44:10.807529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.419 [2024-11-15 11:44:10.807957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.419 [2024-11-15 11:44:10.807985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.419 [2024-11-15 11:44:10.808001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.419 [2024-11-15 11:44:10.808236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.419 [2024-11-15 11:44:10.808476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.419 [2024-11-15 11:44:10.808497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.419 [2024-11-15 11:44:10.808511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.419 [2024-11-15 11:44:10.808523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.419 [2024-11-15 11:44:10.820561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.419 [2024-11-15 11:44:10.820894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.419 [2024-11-15 11:44:10.820922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.419 [2024-11-15 11:44:10.820937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.419 [2024-11-15 11:44:10.821160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.419 [2024-11-15 11:44:10.821396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.419 [2024-11-15 11:44:10.821416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.419 [2024-11-15 11:44:10.821429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.419 [2024-11-15 11:44:10.821441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.419 [2024-11-15 11:44:10.833516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.419 [2024-11-15 11:44:10.833938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.419 [2024-11-15 11:44:10.833966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.419 [2024-11-15 11:44:10.833982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.419 [2024-11-15 11:44:10.834216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.419 [2024-11-15 11:44:10.834453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.419 [2024-11-15 11:44:10.834474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.419 [2024-11-15 11:44:10.834487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.419 [2024-11-15 11:44:10.834499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.679 [2024-11-15 11:44:10.846957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.679 [2024-11-15 11:44:10.847351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.679 [2024-11-15 11:44:10.847380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.679 [2024-11-15 11:44:10.847396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.679 [2024-11-15 11:44:10.847625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.679 [2024-11-15 11:44:10.847850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.679 [2024-11-15 11:44:10.847870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.679 [2024-11-15 11:44:10.847882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.679 [2024-11-15 11:44:10.847893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.679 [2024-11-15 11:44:10.860067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.679 [2024-11-15 11:44:10.860434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.679 [2024-11-15 11:44:10.860462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.679 [2024-11-15 11:44:10.860478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.679 [2024-11-15 11:44:10.860713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.679 [2024-11-15 11:44:10.860922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.679 [2024-11-15 11:44:10.860941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.679 [2024-11-15 11:44:10.860954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.679 [2024-11-15 11:44:10.860965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.679 [2024-11-15 11:44:10.873166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.679 [2024-11-15 11:44:10.873611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.679 [2024-11-15 11:44:10.873654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.679 [2024-11-15 11:44:10.873677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.679 [2024-11-15 11:44:10.873918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.679 [2024-11-15 11:44:10.874126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.679 [2024-11-15 11:44:10.874145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.679 [2024-11-15 11:44:10.874157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.679 [2024-11-15 11:44:10.874168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.679 [2024-11-15 11:44:10.886292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.679 [2024-11-15 11:44:10.886773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.679 [2024-11-15 11:44:10.886827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.679 [2024-11-15 11:44:10.886842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.679 [2024-11-15 11:44:10.887102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.679 [2024-11-15 11:44:10.887294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.679 [2024-11-15 11:44:10.887338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.679 [2024-11-15 11:44:10.887351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.679 [2024-11-15 11:44:10.887364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.679 [2024-11-15 11:44:10.899370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.679 [2024-11-15 11:44:10.899746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.679 [2024-11-15 11:44:10.899833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.679 [2024-11-15 11:44:10.899848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.679 [2024-11-15 11:44:10.900088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.679 [2024-11-15 11:44:10.900295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.679 [2024-11-15 11:44:10.900325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.679 [2024-11-15 11:44:10.900338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.679 [2024-11-15 11:44:10.900366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.679 [2024-11-15 11:44:10.912479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.679 [2024-11-15 11:44:10.912846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.679 [2024-11-15 11:44:10.912888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.679 [2024-11-15 11:44:10.912904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.679 [2024-11-15 11:44:10.913157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.679 [2024-11-15 11:44:10.913411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.679 [2024-11-15 11:44:10.913432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.679 [2024-11-15 11:44:10.913445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.679 [2024-11-15 11:44:10.913457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.680 [2024-11-15 11:44:10.925566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:10.925928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:10.925956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:10.925972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:10.926207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:10.926444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:10.926465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:10.926478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:10.926490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.680 [2024-11-15 11:44:10.938527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:10.939016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:10.939058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:10.939075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:10.939348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:10.939554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:10.939574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:10.939587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:10.939613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.680 [2024-11-15 11:44:10.951509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:10.951870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:10.951899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:10.951915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:10.952142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:10.952399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:10.952421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:10.952440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:10.952453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.680 [2024-11-15 11:44:10.964565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:10.964950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:10.964991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:10.965006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:10.965254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:10.965496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:10.965517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:10.965531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:10.965543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.680 [2024-11-15 11:44:10.977570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:10.977943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:10.977987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:10.978003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:10.978272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:10.978498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:10.978519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:10.978532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:10.978545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.680 [2024-11-15 11:44:10.990717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:10.991210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:10.991238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:10.991269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:10.991509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:10.991754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:10.991773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:10.991786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:10.991797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.680 [2024-11-15 11:44:11.003795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:11.004199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:11.004226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:11.004241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:11.004523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:11.004753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:11.004773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:11.004785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:11.004812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.680 [2024-11-15 11:44:11.016946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:11.017314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:11.017342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:11.017358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:11.017593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:11.017801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:11.017821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:11.017833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:11.017844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.680 [2024-11-15 11:44:11.029977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:11.030358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:11.030403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:11.030420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:11.030657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:11.030879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:11.030900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.680 [2024-11-15 11:44:11.030914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.680 [2024-11-15 11:44:11.030926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.680 [2024-11-15 11:44:11.043167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.680 [2024-11-15 11:44:11.043543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.680 [2024-11-15 11:44:11.043572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.680 [2024-11-15 11:44:11.043595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.680 [2024-11-15 11:44:11.043833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.680 [2024-11-15 11:44:11.044042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.680 [2024-11-15 11:44:11.044061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.681 [2024-11-15 11:44:11.044074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.681 [2024-11-15 11:44:11.044086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.681 [2024-11-15 11:44:11.056171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.681 [2024-11-15 11:44:11.056547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.681 [2024-11-15 11:44:11.056590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.681 [2024-11-15 11:44:11.056606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.681 [2024-11-15 11:44:11.056859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.681 [2024-11-15 11:44:11.057067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.681 [2024-11-15 11:44:11.057086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.681 [2024-11-15 11:44:11.057099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.681 [2024-11-15 11:44:11.057110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.681 [2024-11-15 11:44:11.069227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.681 [2024-11-15 11:44:11.069637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.681 [2024-11-15 11:44:11.069679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.681 [2024-11-15 11:44:11.069694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.681 [2024-11-15 11:44:11.069933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.681 [2024-11-15 11:44:11.070141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.681 [2024-11-15 11:44:11.070160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.681 [2024-11-15 11:44:11.070173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.681 [2024-11-15 11:44:11.070184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.681 [2024-11-15 11:44:11.082356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.681 [2024-11-15 11:44:11.082727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.681 [2024-11-15 11:44:11.082754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.681 [2024-11-15 11:44:11.082769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.681 [2024-11-15 11:44:11.082984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.681 [2024-11-15 11:44:11.083196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.681 [2024-11-15 11:44:11.083215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.681 [2024-11-15 11:44:11.083228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.681 [2024-11-15 11:44:11.083239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.681 [2024-11-15 11:44:11.095553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.681 [2024-11-15 11:44:11.095914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.681 [2024-11-15 11:44:11.095956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.681 [2024-11-15 11:44:11.095971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.681 [2024-11-15 11:44:11.096212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.681 [2024-11-15 11:44:11.096433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.681 [2024-11-15 11:44:11.096454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.681 [2024-11-15 11:44:11.096467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.681 [2024-11-15 11:44:11.096479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.942 [2024-11-15 11:44:11.109198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.109635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.109678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.109695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.109947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.110139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.110158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.110171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.110182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.942 [2024-11-15 11:44:11.122260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.122595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.122623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.122639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.122861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.123090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.123109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.123127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.123140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.942 [2024-11-15 11:44:11.135338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.135702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.135730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.135761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.136014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.136207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.136226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.136238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.136249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.942 [2024-11-15 11:44:11.148329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.148692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.148733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.148749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.148995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.149203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.149222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.149235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.149246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.942 [2024-11-15 11:44:11.161394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.161723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.161750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.161765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.161988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.162196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.162215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.162228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.162239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.942 [2024-11-15 11:44:11.174471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.174801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.174830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.174845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.175067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.175276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.175295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.175333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.175347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.942 [2024-11-15 11:44:11.187605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.188042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.188085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.188102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.188352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.188550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.188570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.188583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.188595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.942 [2024-11-15 11:44:11.200636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.201060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.942 [2024-11-15 11:44:11.201103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.942 [2024-11-15 11:44:11.201119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.942 [2024-11-15 11:44:11.201398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.942 [2024-11-15 11:44:11.201604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.942 [2024-11-15 11:44:11.201625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.942 [2024-11-15 11:44:11.201638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.942 [2024-11-15 11:44:11.201650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.942 [2024-11-15 11:44:11.213638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.942 [2024-11-15 11:44:11.214003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.214031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.214052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.214295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3035809 Killed "${NVMF_APP[@]}" "$@" 00:25:30.943 [2024-11-15 11:44:11.214538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.214560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.214574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.214602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3036762 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3036762 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3036762 ']' 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.943 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:30.943 [2024-11-15 11:44:11.227025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.227355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.227384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.227401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.227615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 [2024-11-15 11:44:11.227846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.227866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.227879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.227891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.943 [2024-11-15 11:44:11.240422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.240817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.240845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.240861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.241091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 [2024-11-15 11:44:11.241332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.241368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.241383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.241395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.943 [2024-11-15 11:44:11.253908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.254230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.254258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.254273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.254539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 [2024-11-15 11:44:11.254773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.254793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.254806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.254818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.943 [2024-11-15 11:44:11.264913] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:30.943 [2024-11-15 11:44:11.264965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.943 [2024-11-15 11:44:11.267246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.267648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.267677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.267693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.267936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 [2024-11-15 11:44:11.268151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.268171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.268184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.268196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.943 [2024-11-15 11:44:11.280734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.281151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.281181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.281198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.281442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 [2024-11-15 11:44:11.281679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.281701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.281715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.281743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.943 [2024-11-15 11:44:11.293974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.294353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.294383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.294400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.294641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.943 [2024-11-15 11:44:11.294855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.943 [2024-11-15 11:44:11.294876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.943 [2024-11-15 11:44:11.294889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.943 [2024-11-15 11:44:11.294901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.943 [2024-11-15 11:44:11.307361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.943 [2024-11-15 11:44:11.307762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.943 [2024-11-15 11:44:11.307791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.943 [2024-11-15 11:44:11.307808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.943 [2024-11-15 11:44:11.308037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.944 [2024-11-15 11:44:11.308253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.944 [2024-11-15 11:44:11.308273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.944 [2024-11-15 11:44:11.308310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.944 [2024-11-15 11:44:11.308327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.944 [2024-11-15 11:44:11.320687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.944 [2024-11-15 11:44:11.321121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-11-15 11:44:11.321150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.944 [2024-11-15 11:44:11.321175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.944 [2024-11-15 11:44:11.321415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.944 [2024-11-15 11:44:11.321637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.944 [2024-11-15 11:44:11.321657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.944 [2024-11-15 11:44:11.321669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.944 [2024-11-15 11:44:11.321681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.944 [2024-11-15 11:44:11.333921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.944 [2024-11-15 11:44:11.334359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-11-15 11:44:11.334388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.944 [2024-11-15 11:44:11.334404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.944 [2024-11-15 11:44:11.334637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.944 [2024-11-15 11:44:11.334853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.944 [2024-11-15 11:44:11.334874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.944 [2024-11-15 11:44:11.334887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.944 [2024-11-15 11:44:11.334898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:30.944 [2024-11-15 11:44:11.340297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:30.944 [2024-11-15 11:44:11.347100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.944 [2024-11-15 11:44:11.347600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-11-15 11:44:11.347630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.944 [2024-11-15 11:44:11.347662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.944 [2024-11-15 11:44:11.347911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.944 [2024-11-15 11:44:11.348128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.944 [2024-11-15 11:44:11.348148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.944 [2024-11-15 11:44:11.348162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.944 [2024-11-15 11:44:11.348175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:30.944 [2024-11-15 11:44:11.360499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:30.944 [2024-11-15 11:44:11.360941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.944 [2024-11-15 11:44:11.360977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:30.944 [2024-11-15 11:44:11.360996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:30.944 [2024-11-15 11:44:11.361246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:30.944 [2024-11-15 11:44:11.361518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:30.944 [2024-11-15 11:44:11.361542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:30.944 [2024-11-15 11:44:11.361557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:30.944 [2024-11-15 11:44:11.361571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.204 [2024-11-15 11:44:11.374020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.204 [2024-11-15 11:44:11.374462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.204 [2024-11-15 11:44:11.374491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.204 [2024-11-15 11:44:11.374507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.204 [2024-11-15 11:44:11.374751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.204 [2024-11-15 11:44:11.374950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.204 [2024-11-15 11:44:11.374970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.204 [2024-11-15 11:44:11.374983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.204 [2024-11-15 11:44:11.374995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.204 [2024-11-15 11:44:11.387220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.204 [2024-11-15 11:44:11.387674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.204 [2024-11-15 11:44:11.387703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.204 [2024-11-15 11:44:11.387720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.204 [2024-11-15 11:44:11.387950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.204 [2024-11-15 11:44:11.388165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.204 [2024-11-15 11:44:11.388185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.204 [2024-11-15 11:44:11.388197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.204 [2024-11-15 11:44:11.388209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.204 [2024-11-15 11:44:11.398930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.204 [2024-11-15 11:44:11.398961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.204 [2024-11-15 11:44:11.398989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.204 [2024-11-15 11:44:11.399000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.204 [2024-11-15 11:44:11.399010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:31.204 [2024-11-15 11:44:11.400393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.204 [2024-11-15 11:44:11.400421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.204 [2024-11-15 11:44:11.400426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.204 [2024-11-15 11:44:11.400575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.204 [2024-11-15 11:44:11.400944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.204 [2024-11-15 11:44:11.400973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.204 [2024-11-15 11:44:11.400990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.204 [2024-11-15 11:44:11.401204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.204 [2024-11-15 11:44:11.401461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.204 [2024-11-15 11:44:11.401484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.204 [2024-11-15 11:44:11.401499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.204 [2024-11-15 11:44:11.401513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.204 [2024-11-15 11:44:11.414092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.204 [2024-11-15 11:44:11.414611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.204 [2024-11-15 11:44:11.414650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.204 [2024-11-15 11:44:11.414670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.204 [2024-11-15 11:44:11.414906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.204 [2024-11-15 11:44:11.415122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.204 [2024-11-15 11:44:11.415145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.204 [2024-11-15 11:44:11.415161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.204 [2024-11-15 11:44:11.415176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.204 [2024-11-15 11:44:11.427686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.204 [2024-11-15 11:44:11.428198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.204 [2024-11-15 11:44:11.428236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.428256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.428488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.428722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.428744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.428760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.428776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.205 [2024-11-15 11:44:11.441200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.441720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.441759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.441791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.442027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.442242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.442264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.442280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.442294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.205 [2024-11-15 11:44:11.454817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.455255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.455290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.455318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.455540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.455773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.455797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.455812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.455826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.205 [2024-11-15 11:44:11.468282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.468817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.468856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.468875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.469112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.469364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.469387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.469404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.469420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.205 [2024-11-15 11:44:11.481907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.482349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.482383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.482403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.482639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.482862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.482883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.482898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.482913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.205 [2024-11-15 11:44:11.495401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.495794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.495822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.495839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.496054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.496281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.496328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.496342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.496356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.205 [2024-11-15 11:44:11.508917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.509272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.509301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.509326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.509539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.509758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.509780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.509794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.205 [2024-11-15 11:44:11.509806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.205 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.205 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:31.205 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.205 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.205 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.205 [2024-11-15 11:44:11.522474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.205 [2024-11-15 11:44:11.522847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.205 [2024-11-15 11:44:11.522876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.205 [2024-11-15 11:44:11.522893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.205 [2024-11-15 11:44:11.523135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.205 [2024-11-15 11:44:11.523377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.205 [2024-11-15 11:44:11.523401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.205 [2024-11-15 11:44:11.523415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.206 [2024-11-15 11:44:11.523428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 [2024-11-15 11:44:11.535908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.206 [2024-11-15 11:44:11.536317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.206 [2024-11-15 11:44:11.536349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.206 [2024-11-15 11:44:11.536366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.206 [2024-11-15 11:44:11.536580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.206 [2024-11-15 11:44:11.536821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.206 [2024-11-15 11:44:11.536847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.206 [2024-11-15 11:44:11.536861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.206 [2024-11-15 11:44:11.536876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.206 [2024-11-15 11:44:11.538203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 [2024-11-15 11:44:11.549676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.206 [2024-11-15 11:44:11.550054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.206 [2024-11-15 11:44:11.550084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.206 [2024-11-15 11:44:11.550100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.206 [2024-11-15 11:44:11.550327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.206 [2024-11-15 11:44:11.550549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.206 [2024-11-15 11:44:11.550571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.206 [2024-11-15 11:44:11.550586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.206 [2024-11-15 11:44:11.550599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.206 [2024-11-15 11:44:11.563120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.206 [2024-11-15 11:44:11.563509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.206 [2024-11-15 11:44:11.563540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.206 [2024-11-15 11:44:11.563558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.206 [2024-11-15 11:44:11.563787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.206 [2024-11-15 11:44:11.564009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.206 [2024-11-15 11:44:11.564030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.206 [2024-11-15 11:44:11.564044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.206 [2024-11-15 11:44:11.564057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.206 [2024-11-15 11:44:11.576804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.206 [2024-11-15 11:44:11.577263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.206 [2024-11-15 11:44:11.577299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.206 [2024-11-15 11:44:11.577327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.206 [2024-11-15 11:44:11.577556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.206 [2024-11-15 11:44:11.577791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.206 [2024-11-15 11:44:11.577813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.206 [2024-11-15 11:44:11.577828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.206 [2024-11-15 11:44:11.577842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.206 Malloc0 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 [2024-11-15 11:44:11.590501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.206 [2024-11-15 11:44:11.590859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.206 [2024-11-15 11:44:11.590887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189fa40 with addr=10.0.0.2, port=4420 00:25:31.206 [2024-11-15 11:44:11.590904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189fa40 is same with the state(6) to be set 00:25:31.206 [2024-11-15 11:44:11.591134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189fa40 (9): Bad file descriptor 00:25:31.206 [2024-11-15 11:44:11.591382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.206 [2024-11-15 11:44:11.591405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.206 [2024-11-15 11:44:11.591419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:25:31.206 [2024-11-15 11:44:11.591432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.206 [2024-11-15 11:44:11.600047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.206 [2024-11-15 11:44:11.604175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.206 11:44:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3036096 00:25:31.465 3660.83 IOPS, 14.30 MiB/s [2024-11-15T10:44:11.892Z] [2024-11-15 11:44:11.761557] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:25:33.333 4140.14 IOPS, 16.17 MiB/s [2024-11-15T10:44:14.693Z] 4672.00 IOPS, 18.25 MiB/s [2024-11-15T10:44:15.626Z] 5074.78 IOPS, 19.82 MiB/s [2024-11-15T10:44:16.999Z] 5412.20 IOPS, 21.14 MiB/s [2024-11-15T10:44:17.932Z] 5682.64 IOPS, 22.20 MiB/s [2024-11-15T10:44:18.865Z] 5918.42 IOPS, 23.12 MiB/s [2024-11-15T10:44:19.798Z] 6108.85 IOPS, 23.86 MiB/s [2024-11-15T10:44:20.735Z] 6275.50 IOPS, 24.51 MiB/s [2024-11-15T10:44:20.735Z] 6409.53 IOPS, 25.04 MiB/s 00:25:40.308 Latency(us) 00:25:40.308 [2024-11-15T10:44:20.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.308 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:40.308 Verification LBA range: start 0x0 length 0x4000 00:25:40.308 Nvme1n1 : 15.01 6412.65 25.05 10297.56 0.00 7637.20 916.29 21359.88 00:25:40.308 [2024-11-15T10:44:20.735Z] =================================================================================================================== 00:25:40.308 [2024-11-15T10:44:20.735Z] Total : 6412.65 25.05 10297.56 0.00 7637.20 916.29 21359.88 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.565 rmmod nvme_tcp 00:25:40.565 rmmod nvme_fabrics 00:25:40.565 rmmod nvme_keyring 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3036762 ']' 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3036762 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3036762 ']' 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3036762 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3036762 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3036762' 00:25:40.565 killing process with pid 3036762 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3036762 00:25:40.565 11:44:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3036762 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.825 11:44:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:43.388 00:25:43.388 real 0m22.499s 00:25:43.388 user 0m58.790s 00:25:43.388 sys 0m4.827s 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 ************************************ 00:25:43.388 END TEST nvmf_bdevperf 00:25:43.388 ************************************ 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 ************************************ 00:25:43.388 START TEST nvmf_target_disconnect 00:25:43.388 ************************************ 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:43.388 * Looking for test storage... 00:25:43.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:43.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.388 --rc genhtml_branch_coverage=1 00:25:43.388 --rc genhtml_function_coverage=1 00:25:43.388 --rc genhtml_legend=1 00:25:43.388 --rc geninfo_all_blocks=1 00:25:43.388 --rc geninfo_unexecuted_blocks=1 00:25:43.388 00:25:43.388 ' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:43.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.388 --rc genhtml_branch_coverage=1 00:25:43.388 --rc genhtml_function_coverage=1 00:25:43.388 --rc genhtml_legend=1 00:25:43.388 --rc geninfo_all_blocks=1 00:25:43.388 --rc geninfo_unexecuted_blocks=1 00:25:43.388 00:25:43.388 ' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:43.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.388 --rc genhtml_branch_coverage=1 00:25:43.388 --rc genhtml_function_coverage=1 00:25:43.388 --rc genhtml_legend=1 00:25:43.388 --rc geninfo_all_blocks=1 00:25:43.388 --rc geninfo_unexecuted_blocks=1 00:25:43.388 00:25:43.388 ' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:43.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.388 --rc genhtml_branch_coverage=1 00:25:43.388 --rc genhtml_function_coverage=1 00:25:43.388 --rc genhtml_legend=1 00:25:43.388 --rc geninfo_all_blocks=1 00:25:43.388 --rc geninfo_unexecuted_blocks=1 00:25:43.388 00:25:43.388 ' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.388 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:43.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:43.389 11:44:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:45.326 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:45.326 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:45.326 Found net devices under 0000:09:00.0: cvl_0_0 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:45.326 Found net devices under 0000:09:00.1: cvl_0_1 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.326 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:25:45.327 00:25:45.327 --- 10.0.0.2 ping statistics --- 00:25:45.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.327 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
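The interface plumbing traced in the entries above (the reply to the second ping follows in the next entry) can be reproduced by hand roughly as the sketch below. The cvl_0_0/cvl_0_1 names and the cvl_0_0_ns_spdk namespace are the ones this run detected for the two E810 ports; other hosts will see different names, and the ipts wrapper used by the harness additionally tags the iptables rule with an SPDK_NVMF comment.

    # hedged sketch of the nvmf_tcp_init steps traced above, not the harness code itself
    # (the harness first flushes any stale IPv4 addresses from both interfaces)
    ip netns add cvl_0_0_ns_spdk                        # target port lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the listener
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back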
00:25:45.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:25:45.327 00:25:45.327 --- 10.0.0.1 ping statistics --- 00:25:45.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.327 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.327 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 ************************************ 00:25:45.587 START TEST nvmf_target_disconnect_tc1 00:25:45.587 ************************************ 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.587 11:44:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.587 [2024-11-15 11:44:25.873150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.587 [2024-11-15 11:44:25.873224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2195f40 with addr=10.0.0.2, port=4420 00:25:45.587 [2024-11-15 11:44:25.873255] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:45.587 [2024-11-15 11:44:25.873279] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:45.587 [2024-11-15 11:44:25.873293] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:45.587 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:45.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:45.587 Initializing NVMe Controllers 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:45.587 00:25:45.587 real 0m0.108s 00:25:45.587 user 0m0.051s 00:25:45.587 sys 0m0.057s 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 ************************************ 00:25:45.587 END TEST nvmf_target_disconnect_tc1 00:25:45.587 ************************************ 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 ************************************ 00:25:45.587 START TEST nvmf_target_disconnect_tc2 00:25:45.587 ************************************ 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3039928 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3039928 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3039928 ']' 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.587 11:44:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.587 [2024-11-15 11:44:25.992950] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:45.587 [2024-11-15 11:44:25.993039] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.846 [2024-11-15 11:44:26.065688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.846 [2024-11-15 11:44:26.126466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.846 [2024-11-15 11:44:26.126515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:45.846 [2024-11-15 11:44:26.126528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.846 [2024-11-15 11:44:26.126540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.846 [2024-11-15 11:44:26.126550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.846 [2024-11-15 11:44:26.128034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:45.846 [2024-11-15 11:44:26.128099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:45.846 [2024-11-15 11:44:26.128164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:45.846 [2024-11-15 11:44:26.128167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:45.846 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.846 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:45.846 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:45.846 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:45.846 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 Malloc0 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 [2024-11-15 11:44:26.313757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 11:44:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 [2024-11-15 11:44:26.341996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3039954 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:46.104 11:44:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:48.011 11:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3039928 00:25:48.011 11:44:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error 
(sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 [2024-11-15 11:44:28.366869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed 
with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Read completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 Write completed with error (sct=0, sc=8) 00:25:48.011 starting I/O failed 00:25:48.011 [2024-11-15 11:44:28.367183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:48.011 [2024-11-15 11:44:28.367359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.367391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.367494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.367519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.367661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.367685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.367827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.367852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.367969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.367993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 
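For reference, the target-side bring-up that tc2 performed above (nvmfappstart in the cvl_0_0_ns_spdk namespace plus the rpc_cmd calls from host/target_disconnect.sh) corresponds roughly to the hand-run sequence below; the scripts/rpc.py form is an equivalent sketch against the default /var/tmp/spdk.sock socket, not the exact wrapper the harness uses, and the relative paths stand in for the full workspace paths shown in the trace.

    # hedged equivalent of the traced setup; the -i 0 / -e 0xFFFF / -m 0xF0 options are taken from this run
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    # tc2 then kill -9's the target process mid-run; the connect() failed, errno = 111 (ECONNREFUSED)
    # entries surrounding this point are the reconnect example repeatedly failing to re-establish qpairs.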
00:25:48.011 [2024-11-15 11:44:28.368113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.368154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.368258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.368286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.368401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.368428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.368516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.011 [2024-11-15 11:44:28.368542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.011 qpair failed and we were unable to recover it. 00:25:48.011 [2024-11-15 11:44:28.368664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.368691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.368811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.368836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.368976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 
00:25:48.012 [2024-11-15 11:44:28.369487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.369887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.369975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.370127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.370279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.370429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.370550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.370697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 
00:25:48.012 [2024-11-15 11:44:28.370834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.370955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.370981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.371959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.371984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 
00:25:48.012 [2024-11-15 11:44:28.372095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.372232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.372361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.372480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.372590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.372764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.372893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.372918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.373005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.373029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.373104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.373129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.373245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.373269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 
00:25:48.012 [2024-11-15 11:44:28.373359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.373384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.012 [2024-11-15 11:44:28.373486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.012 [2024-11-15 11:44:28.373519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.012 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.373628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.373652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.373742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.373765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.373848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.373874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.373984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.374107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.374242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.374370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.374493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 
00:25:48.013 [2024-11-15 11:44:28.374608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.374745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.374883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.374911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.375766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 
00:25:48.013 [2024-11-15 11:44:28.375905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.375933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.376016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.376041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Write completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 Read completed with error (sct=0, sc=8) 00:25:48.013 starting I/O failed 00:25:48.013 [2024-11-15 11:44:28.376370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:48.013 [2024-11-15 11:44:28.376464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.376491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.376584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.376608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.376701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.376727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.376844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.376868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.376954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.376979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.377069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.377093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.377172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.013 [2024-11-15 11:44:28.377197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.013 qpair failed and we were unable to recover it. 00:25:48.013 [2024-11-15 11:44:28.377322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.377347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.377459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.377484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.377564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.377588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 
00:25:48.014 [2024-11-15 11:44:28.377666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.377690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.377805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.377829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.377914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.377939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.378771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 
00:25:48.014 [2024-11-15 11:44:28.378881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.378905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.379894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.379917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 
00:25:48.014 [2024-11-15 11:44:28.380147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.380937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.380961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 
00:25:48.014 [2024-11-15 11:44:28.381435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.381950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.381974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.014 qpair failed and we were unable to recover it. 00:25:48.014 [2024-11-15 11:44:28.382083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.014 [2024-11-15 11:44:28.382107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.382185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.382290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.382444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.382558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 
00:25:48.015 [2024-11-15 11:44:28.382720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.382836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.382950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.382974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.383908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.383934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 
00:25:48.015 [2024-11-15 11:44:28.384026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.384878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.384902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 
00:25:48.015 [2024-11-15 11:44:28.385236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.385966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.385993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.386100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.015 [2024-11-15 11:44:28.386126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.015 qpair failed and we were unable to recover it. 00:25:48.015 [2024-11-15 11:44:28.386239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.386265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.386344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.386369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.386483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.386507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 
00:25:48.016 [2024-11-15 11:44:28.386590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.386619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.386755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.386780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.386893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.386919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.387734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 
00:25:48.016 [2024-11-15 11:44:28.387865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.387891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.388922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.388947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 
00:25:48.016 [2024-11-15 11:44:28.389200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.389958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.389984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.390108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.390267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.390395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 
00:25:48.016 [2024-11-15 11:44:28.390538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.390638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.390799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.390908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.390934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.391010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.391035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.016 qpair failed and we were unable to recover it. 00:25:48.016 [2024-11-15 11:44:28.391107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.016 [2024-11-15 11:44:28.391132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.391215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.391240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.391344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.391374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.391496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.391521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.391630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.391654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 
00:25:48.017 [2024-11-15 11:44:28.391766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.391795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.391938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.391963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.392879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.392906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 
00:25:48.017 [2024-11-15 11:44:28.393165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.393858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.393974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.394124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.394260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.394411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 
00:25:48.017 [2024-11-15 11:44:28.394522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.394636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.394777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.394917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.394943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 
00:25:48.017 [2024-11-15 11:44:28.395854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.395959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.395984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.396071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.396098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.017 [2024-11-15 11:44:28.396223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.017 [2024-11-15 11:44:28.396260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.017 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.396355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.396381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.396523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.396548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.396685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.396710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.396824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.396850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.396957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.396981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.397089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 
00:25:48.018 [2024-11-15 11:44:28.397199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.397339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.397484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.397619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.397749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.397859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.397887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.398020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.398183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.398325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.398466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 
00:25:48.018 [2024-11-15 11:44:28.398630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.398797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.398914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.398940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.399768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 
00:25:48.018 [2024-11-15 11:44:28.399885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.399913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.400906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.400944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.401039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.401065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 
00:25:48.018 [2024-11-15 11:44:28.401149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.401174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.401292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.018 [2024-11-15 11:44:28.401324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.018 qpair failed and we were unable to recover it. 00:25:48.018 [2024-11-15 11:44:28.401407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.401434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.401543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.401569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.401705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.401730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.401812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.401837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 
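The repeated `connect() failed, errno = 111` / `sock connection error of tqpair=... with addr=10.0.0.2, port=4420` pairs above all report the same condition: on Linux, errno 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP default port) at the moment each attempt was made, so the initiator keeps retrying and logging "qpair failed and we were unable to recover it." The following minimal sketch, assuming a Linux host, performs the same kind of connect() that posix_sock_create() is reporting and prints the errno; the address and port are taken from the log, and the probe itself is purely illustrative, not part of SPDK.

    /* Hypothetical standalone probe: attempt a TCP connect to the target
     * address/port seen in the log and report errno.  With no listener on
     * the port, connect() fails with errno 111 (ECONNREFUSED) on Linux. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in target = { .sin_family = AF_INET };
        int fd, rc;

        target.sin_port = htons(4420);                  /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &target.sin_addr);

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        rc = connect(fd, (struct sockaddr *)&target, sizeof(target));
        if (rc != 0) {
            /* Matches the posix_sock_create() errors above when the listener
             * is down or not yet up. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        } else {
            printf("connect() succeeded\n");
        }

        close(fd);
        return rc == 0 ? 0 : 1;
    }

In this test the target listener at 10.0.0.2:4420 is being torn down or restarted, so the refusals are expected; the probe only shows what errno 111 corresponds to.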
00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Read completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 Write completed with error (sct=0, sc=8) 00:25:48.019 starting I/O failed 00:25:48.019 [2024-11-15 11:44:28.402147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:48.019 [2024-11-15 11:44:28.402256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.402293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 
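The block above differs from the surrounding connection spam: outstanding reads and writes on qpair id 2 of nqn.2016-06.io.spdk:cnode1 complete with an error status ("starting I/O failed"), and spdk_nvme_qpair_process_completions() then reports CQ transport error -6, i.e. -ENXIO ("No such device or address"), meaning the qpair's transport connection itself is gone rather than individual commands being rejected. Below is a sketch of how an SPDK initiator typically distinguishes the two cases; it assumes the SPDK development headers, and the names io_ctx_t, on_io_complete(), and poll_until_done() are hypothetical and only illustrate the callback and polling shape.

    /* Sketch, assuming SPDK headers: per-command status vs. transport failure. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    typedef struct {
        bool done;
        bool failed;
    } io_ctx_t;

    /* Completion callback, e.g. registered with spdk_nvme_ns_cmd_read(). */
    static void
    on_io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        io_ctx_t *ctx = arg;

        ctx->done = true;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* Per-command view: the "(sct=0, sc=8)" pairs in the log come
             * from these two status fields of the completion entry. */
            fprintf(stderr, "I/O completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
            ctx->failed = true;
        }
    }

    /* Polling loop: a negative return value (here -6, i.e. -ENXIO) means the
     * qpair transport failed, matching the "CQ transport error -6" message. */
    static int
    poll_until_done(struct spdk_nvme_qpair *qpair, io_ctx_t *ctx)
    {
        while (!ctx->done) {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc < 0) {
                fprintf(stderr, "qpair transport error: %d\n", rc);
                return rc;   /* e.g. -ENXIO: reconnect or fail over */
            }
        }
        return ctx->failed ? -EIO : 0;
    }

Under this failure-injection test the transport error is the expected outcome; the per-command error statuses are the side effect of the queue pair being torn down underneath the in-flight I/O.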
00:25:48.019 [2024-11-15 11:44:28.402408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.402435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.402547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.402573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.402654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.402679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.402791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.402815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.402925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.402951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.403065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.403217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.403395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.403540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.403701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 
00:25:48.019 [2024-11-15 11:44:28.403839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.403964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.403990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.404078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.404106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.404230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.404269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.404401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.404429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.404520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.404546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.404701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.404752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.404903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.404955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.405033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.405058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.405155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.405193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 
00:25:48.019 [2024-11-15 11:44:28.405300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.405356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.405491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.019 [2024-11-15 11:44:28.405519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.019 qpair failed and we were unable to recover it. 00:25:48.019 [2024-11-15 11:44:28.405641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.405666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.405773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.405798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.405899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.405924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 
00:25:48.020 [2024-11-15 11:44:28.406671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.406936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.406963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.407079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.407105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.407219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.407246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.407424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.407465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.407581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.407607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.407824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.407884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.408011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.408035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.408149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.408174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 
00:25:48.020 [2024-11-15 11:44:28.408297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.408346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.408430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.408457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.408567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.408593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.408744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.408798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.408948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.409080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.409214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.409367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.409504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.409676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 
00:25:48.020 [2024-11-15 11:44:28.409934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.409988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.410177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.410222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.410325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.410352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.410439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.410465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.410560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.410587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.020 qpair failed and we were unable to recover it. 00:25:48.020 [2024-11-15 11:44:28.410678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.020 [2024-11-15 11:44:28.410704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.410814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.410839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.410953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.410978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.411069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.411213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 
00:25:48.021 [2024-11-15 11:44:28.411372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.411530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.411670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.411819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.411964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.411991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.412103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.412245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.412379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.412518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.412649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 
00:25:48.021 [2024-11-15 11:44:28.412759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.412868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.412895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.413869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.413896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 
00:25:48.021 [2024-11-15 11:44:28.414111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.414900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.414983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.415144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.415248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 
00:25:48.021 [2024-11-15 11:44:28.415408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.415584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.415745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.021 [2024-11-15 11:44:28.415864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.021 [2024-11-15 11:44:28.415890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.021 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.415976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.416136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.416271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.416428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.416587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.416749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 
00:25:48.022 [2024-11-15 11:44:28.416891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.416916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.417843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.417867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.418056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.418171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 
00:25:48.022 [2024-11-15 11:44:28.418320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.418434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.418558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.418695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.418909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.418942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.419053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.419235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.419373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.419490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.419631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 
00:25:48.022 [2024-11-15 11:44:28.419767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.419933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.419978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.420094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.420128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.420296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.420330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.420443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.420468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.420556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.420581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.420664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.420688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.420873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.420923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.421074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.421118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 00:25:48.022 [2024-11-15 11:44:28.421209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.022 [2024-11-15 11:44:28.421237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.022 qpair failed and we were unable to recover it. 
00:25:48.022 [2024-11-15 11:44:28.421327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:48.022 [2024-11-15 11:44:28.421353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 
00:25:48.022 qpair failed and we were unable to recover it. 
00:25:48.023 [2024-11-15 11:44:28.422075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:48.023 [2024-11-15 11:44:28.422102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 
00:25:48.023 qpair failed and we were unable to recover it. 
00:25:48.023 [2024-11-15 11:44:28.422336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:48.023 [2024-11-15 11:44:28.422376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 
00:25:48.023 qpair failed and we were unable to recover it. 
00:25:48.023 [2024-11-15 11:44:28.424155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:48.023 [2024-11-15 11:44:28.424213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 
00:25:48.023 qpair failed and we were unable to recover it. 
00:25:48.022-00:25:48.323 [2024-11-15 11:44:28.421447 - 11:44:28.450175] the same posix_sock_create (connect() failed, errno = 111) and nvme_tcp_qpair_connect_sock error pair recurs for every further reconnect attempt against addr=10.0.0.2, port=4420 on tqpairs 0x7fdd30000b90, 0x7fdd2c000b90, 0x7fdd38000b90 and 0x12b1fa0; each attempt ends with: qpair failed and we were unable to recover it. 
00:25:48.323 [2024-11-15 11:44:28.450271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.450299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.450414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.450442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.450524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.450552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.450643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.450668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.450780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.450814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.450952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.450992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.451097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.451130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.451296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.451329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.451417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.451443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.451528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.451553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 
00:25:48.323 [2024-11-15 11:44:28.451637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.451664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.451773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.451806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.452045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.452071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.452151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.452177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.452310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.452350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.452469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.452495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-11-15 11:44:28.452619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-11-15 11:44:28.452646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.452763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.452788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.452869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.452895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-11-15 11:44:28.453162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.453904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.453992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.454146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.454310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.454431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-11-15 11:44:28.454546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.454701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.454862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.454895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.455860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.455887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-11-15 11:44:28.456026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.456246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.456371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.456487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.456650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.456786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.456903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.456928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-11-15 11:44:28.457440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.457866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-11-15 11:44:28.457984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-11-15 11:44:28.458010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.458100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.458232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.458390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.458501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.458613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 
00:25:48.325 [2024-11-15 11:44:28.458732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.458885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.458941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.459945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.459973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 
00:25:48.325 [2024-11-15 11:44:28.460092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.460235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.460370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.460499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.460637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.460775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.460900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.460940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 
00:25:48.325 [2024-11-15 11:44:28.461502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.461963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.461988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.462097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.462226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.462369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.462511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.462629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 
00:25:48.325 [2024-11-15 11:44:28.462765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.462950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.462978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.463066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-11-15 11:44:28.463092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-11-15 11:44:28.463241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.463268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.463394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.463421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.463570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.463595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.463712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.463738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.463875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.463901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 
00:25:48.326 [2024-11-15 11:44:28.464321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.464860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.464998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 
00:25:48.326 [2024-11-15 11:44:28.465617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.465873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.465981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.466028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.466205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.466264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.466383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.466415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.466532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.466559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.466663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.466710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.466859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.466904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.466989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 
00:25:48.326 [2024-11-15 11:44:28.467104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.467261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.467435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.467569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.467683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.467818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.467960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.467987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.468071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.468096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.468202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.468227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.468307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.468333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 
00:25:48.326 [2024-11-15 11:44:28.468444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.326 [2024-11-15 11:44:28.468469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.326 qpair failed and we were unable to recover it. 00:25:48.326 [2024-11-15 11:44:28.468580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.468606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.468709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.468734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.468807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.468832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.468923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.468963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 
00:25:48.327 [2024-11-15 11:44:28.469688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.469949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.469975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.470971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.470997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 
00:25:48.327 [2024-11-15 11:44:28.471091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.471229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.471370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.471498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.471635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.471769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.471906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.471930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 
00:25:48.327 [2024-11-15 11:44:28.472404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.472911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.472936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.473040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.473064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.473217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.473262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.473413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.473443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.473581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.473620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 00:25:48.327 [2024-11-15 11:44:28.473740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.327 [2024-11-15 11:44:28.473765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.327 qpair failed and we were unable to recover it. 
00:25:48.328 [2024-11-15 11:44:28.473908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.473932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.474856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.474908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 
00:25:48.328 [2024-11-15 11:44:28.475255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.475932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.475956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.476060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.476224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.476378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.476527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 
00:25:48.328 [2024-11-15 11:44:28.476631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.476747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.476888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.476915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.477773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.477825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 
00:25:48.328 [2024-11-15 11:44:28.477982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.478049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.478197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.478224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.328 qpair failed and we were unable to recover it. 00:25:48.328 [2024-11-15 11:44:28.478320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.328 [2024-11-15 11:44:28.478348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.478459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.478485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.478594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.478620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.478732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.478760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.478872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.478904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 
00:25:48.329 [2024-11-15 11:44:28.479445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.479907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.479995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.480138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.480283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.480410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.480527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.480640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 
00:25:48.329 [2024-11-15 11:44:28.480820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.480871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.481959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.481986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 
00:25:48.329 [2024-11-15 11:44:28.482217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.482933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.482960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.483067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.483106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.483198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.483224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.329 [2024-11-15 11:44:28.483337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.483364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 
00:25:48.329 [2024-11-15 11:44:28.483473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.329 [2024-11-15 11:44:28.483497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.329 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.483584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.483609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.483693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.483717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.483829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.483854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.483930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.483954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.484043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.484186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.484314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.484479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.484628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 
00:25:48.330 [2024-11-15 11:44:28.484761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.484877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.484904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.485888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.485913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 
00:25:48.330 [2024-11-15 11:44:28.486002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.486912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.486939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 
00:25:48.330 [2024-11-15 11:44:28.487263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.487879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.487987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.488019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.488190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.488223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.488339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.488366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.488483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-11-15 11:44:28.488512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-11-15 11:44:28.488660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.488687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 
00:25:48.331 [2024-11-15 11:44:28.488819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.488852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.488954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.488980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.489970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.489999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.490162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 
00:25:48.331 [2024-11-15 11:44:28.490286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.490431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.490542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.490658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.490796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.490951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.490999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.491114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.491227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.491368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.491489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 
00:25:48.331 [2024-11-15 11:44:28.491645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.491820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.491962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.491987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.492110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.492149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.492241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.492268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.492416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.492455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.492576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.492603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.492692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.492718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.492824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.492869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.493024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 
00:25:48.331 [2024-11-15 11:44:28.493192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.493363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.493475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.493598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.493720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.493882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.493931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.494071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.494098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-11-15 11:44:28.494213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-11-15 11:44:28.494241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.494328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.494353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.494444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.494470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 
00:25:48.332 [2024-11-15 11:44:28.494583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.494608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.494682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.494707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.494843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.494868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.494984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.495128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.495262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.495435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.495553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.495657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.495840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.495877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 
00:25:48.332 [2024-11-15 11:44:28.496004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.496138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.496282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.496438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.496571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.496737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.496877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.496926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 
00:25:48.332 [2024-11-15 11:44:28.497429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.497954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.497980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.498093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.498118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.498268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.498316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.498417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.498445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-11-15 11:44:28.498546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-11-15 11:44:28.498584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.498678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.498704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 
00:25:48.333 [2024-11-15 11:44:28.498793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.498818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.498961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.498987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.499846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.499901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 
00:25:48.333 [2024-11-15 11:44:28.500181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.500969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.500998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.501101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.501147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.501298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.501335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.501418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.501445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 
00:25:48.333 [2024-11-15 11:44:28.501547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.501580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.501746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.501796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.501929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.501979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.502934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.502960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 
00:25:48.333 [2024-11-15 11:44:28.503067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.503092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.503196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.503235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.503358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.503386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.503503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.503530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.503710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.503749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.503953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.503979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.333 qpair failed and we were unable to recover it. 00:25:48.333 [2024-11-15 11:44:28.504060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.333 [2024-11-15 11:44:28.504086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.504209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.504235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.504358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.504397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.504494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.504520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 
00:25:48.334 [2024-11-15 11:44:28.504689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.504734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.504894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.504940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.505063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.505114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.505248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.505276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.505401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.505432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.505544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.505570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.505734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.505780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.505904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.505946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 
00:25:48.334 [2024-11-15 11:44:28.506322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.506876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.506901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 
00:25:48.334 [2024-11-15 11:44:28.507611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.507907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.507998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.508107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.508239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.508399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.508533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.508675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.508843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.508869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 
00:25:48.334 [2024-11-15 11:44:28.508983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.509008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.509130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.509157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.509275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.509307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.509423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.509448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.334 [2024-11-15 11:44:28.509561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.334 [2024-11-15 11:44:28.509586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.334 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.509669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.509695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.509833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.509858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.509981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.510128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.510300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 
00:25:48.335 [2024-11-15 11:44:28.510465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.510608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.510747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.510863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.510889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.511035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.511067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.511195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.511234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.511363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.511393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.511509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.511537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.511622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.511669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.511787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.511827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 
00:25:48.335 [2024-11-15 11:44:28.511980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.512012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.512118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.512151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.512280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.512393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.512487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.512514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.512634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.512659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.512792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.512837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.512977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.513133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.513294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.513437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 
00:25:48.335 [2024-11-15 11:44:28.513548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.513689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.513793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.513958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.513983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 
00:25:48.335 [2024-11-15 11:44:28.514850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.514962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.514996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.515103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.335 [2024-11-15 11:44:28.515129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.335 qpair failed and we were unable to recover it. 00:25:48.335 [2024-11-15 11:44:28.515248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.515287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-11-15 11:44:28.515403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.515442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-11-15 11:44:28.515555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.515583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-11-15 11:44:28.515666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.515692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-11-15 11:44:28.515828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.515854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-11-15 11:44:28.515936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.515962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-11-15 11:44:28.516076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-11-15 11:44:28.516103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 
00:25:48.336 [2024-11-15 11:44:28.516189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.336 [2024-11-15 11:44:28.516216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420
00:25:48.336 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create: connect() failed, errno = 111, i.e. connection refused, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error and "qpair failed and we were unable to recover it.") repeats continuously through 2024-11-15 11:44:28.545242 for tqpair values 0x12b1fa0, 0x7fdd2c000b90, 0x7fdd30000b90, and 0x7fdd38000b90, all attempting to reach addr=10.0.0.2, port=4420 ...]
00:25:48.341 [2024-11-15 11:44:28.545353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.341 [2024-11-15 11:44:28.545378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.341 qpair failed and we were unable to recover it. 00:25:48.341 [2024-11-15 11:44:28.545491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.341 [2024-11-15 11:44:28.545516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.545628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.545653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.545761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.545786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.545877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.545902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.545988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.546128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.546236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.546378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.546515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 
00:25:48.342 [2024-11-15 11:44:28.546655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.546788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.546953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.546979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.547946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.547995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 
00:25:48.342 [2024-11-15 11:44:28.548140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.548175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.548320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.548356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.548476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.548502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.548631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.548679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.548798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.548849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.548986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.549124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.549263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.549373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.549492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 
00:25:48.342 [2024-11-15 11:44:28.549633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.549769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.549908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.549934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 00:25:48.342 [2024-11-15 11:44:28.550874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.342 [2024-11-15 11:44:28.550899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.342 qpair failed and we were unable to recover it. 
00:25:48.343 [2024-11-15 11:44:28.550983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.551952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.551976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 
00:25:48.343 [2024-11-15 11:44:28.552198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.552870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.552981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.553006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.553146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.553171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.553316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.553342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.553470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.553501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 
00:25:48.343 [2024-11-15 11:44:28.553640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.553689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.553850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.553906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.554960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.554992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.555093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 
00:25:48.343 [2024-11-15 11:44:28.555203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.555352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.555512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.555642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.555798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.555922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.555946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.556043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.556070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.556160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.343 [2024-11-15 11:44:28.556186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.343 qpair failed and we were unable to recover it. 00:25:48.343 [2024-11-15 11:44:28.556263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.556289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.556373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.556399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 
00:25:48.344 [2024-11-15 11:44:28.556493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.556519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.556664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.556690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.556807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.556833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.556971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.556996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.557105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.557139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.557309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.557357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.557433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.557459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.557565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.557591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.557705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.557731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.557822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.557870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 
00:25:48.344 [2024-11-15 11:44:28.558068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.558100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.558244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.558270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.558391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.558417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.558553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.558578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.558721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.558747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.558826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.558852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.559003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.559146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.559314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.559448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 
00:25:48.344 [2024-11-15 11:44:28.559554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.559714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.559868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.559906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.560073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.560263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.560432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.560594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.560717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.560877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.560980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 
00:25:48.344 [2024-11-15 11:44:28.561121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.561257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.561393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.561551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.561692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.561833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.344 [2024-11-15 11:44:28.561858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.344 qpair failed and we were unable to recover it. 00:25:48.344 [2024-11-15 11:44:28.561983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.562109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.562257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.562395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 
00:25:48.345 [2024-11-15 11:44:28.562505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.562614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.562717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.562881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.562906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.563016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.563128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.563250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.563367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.563529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 00:25:48.345 [2024-11-15 11:44:28.563640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it. 
00:25:48.345 [2024-11-15 11:44:28.563801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.345 [2024-11-15 11:44:28.563848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.345 qpair failed and we were unable to recover it.
00:25:48.345-00:25:48.348 [2024-11-15 11:44:28.563961 through 11:44:28.579242] the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error pair repeats back to back for tqpair=0x12b1fa0 and tqpair=0x7fdd38000b90, always against addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it."
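For context: errno = 111 in the posix_sock_create failures above is ECONNREFUSED on typical Linux x86-64 hosts, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) while these qpairs were trying to connect. A minimal stand-alone check of that errno mapping, separate from the test itself:

    /* Illustrative only: confirm what errno 111 means on this class of host. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On typical Linux systems this prints "errno 111 = Connection refused". */
        printf("errno 111 = %s\n", strerror(111));
        return 0;
    }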
00:25:48.348 [2024-11-15 11:44:28.579357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-11-15 11:44:28.579382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-11-15 11:44:28.579512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bff30 is same with the state(6) to be set
00:25:48.348 [2024-11-15 11:44:28.579671 through 11:44:28.580631] the connect() failed, errno = 111 / sock connection error pair then repeats for tqpair=0x7fdd38000b90 (addr=10.0.0.2, port=4420), each attempt ending with "qpair failed and we were unable to recover it."
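The one message in this stretch that is not a connect() failure, nvme_tcp_qpair_set_recv_state reporting that the recv state of tqpair=0x12bff30 "is same with the state(6) to be set", indicates the qpair's receive state machine was asked to switch to the state it is already in. A rough sketch of that kind of guard, with hypothetical names and a plain integer for the state (this is not SPDK's actual code):

    #include <stdio.h>

    struct qpair {
        void *ptr;          /* printed as the tqpair address in the log */
        int recv_state;
    };

    /* Report (and skip) a no-op state transition, otherwise apply it. */
    static void set_recv_state(struct qpair *q, int new_state)
    {
        if (q->recv_state == new_state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    q->ptr, new_state);
            return;
        }
        q->recv_state = new_state;
    }

    int main(void)
    {
        struct qpair q = { .ptr = &q, .recv_state = 6 };
        set_recv_state(&q, 6);   /* triggers the duplicate-state message */
        return 0;
    }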
00:25:48.348-00:25:48.350 [2024-11-15 11:44:28.580758 through 11:44:28.591705] the connect() failed, errno = 111 / sock connection error pair keeps repeating, switching back and forth between tqpair=0x7fdd38000b90 and tqpair=0x12b1fa0, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it."
00:25:48.351 [2024-11-15 11:44:28.591784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.591808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.591921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.591968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.592899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.592980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 
00:25:48.351 [2024-11-15 11:44:28.593086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.593204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.593393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.593510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.593689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.593798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.593912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.593939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.594058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.594094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.594242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.594279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.594414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.594444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 
00:25:48.351 [2024-11-15 11:44:28.594565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.594601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.594752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.594787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.594909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.594947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.595095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.595131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.595236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.595272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.595397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.595423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.595564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.595589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.595699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.595770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.595990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.596054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.596254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.596279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 
00:25:48.351 [2024-11-15 11:44:28.596379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.596406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.596496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.596522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.596622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.596648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.596760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.596786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.596949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.597012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.597207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-11-15 11:44:28.597269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-11-15 11:44:28.597448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.597473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.597587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.597670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.597836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.597889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.598020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.598056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 
00:25:48.352 [2024-11-15 11:44:28.598174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.598210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.598365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.598391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.598481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.598507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.598648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.598684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.598845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.598883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.599060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.599096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.599261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.599297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.599464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.599489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.599612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.599669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.599817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.599898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 
00:25:48.352 [2024-11-15 11:44:28.600149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.600216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.600390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.600416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.600528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.600553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.600692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.600718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.600842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.600867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.601050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.601086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.601208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.601258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.601395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.601421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.601530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.601555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.601662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.601704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 
00:25:48.352 [2024-11-15 11:44:28.601824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.601859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.601989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.602026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.602181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.602217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.602359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.602395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.602547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.602583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.602729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.602765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.602912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.602948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.603099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.603135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.603318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.603399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.603607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.603669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 
00:25:48.352 [2024-11-15 11:44:28.603880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.603942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.604195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.604260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.604449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.604485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.604653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.352 [2024-11-15 11:44:28.604689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.352 qpair failed and we were unable to recover it. 00:25:48.352 [2024-11-15 11:44:28.604807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.604843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.604997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.605033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.605185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.605222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.605374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.605410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.605555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.605591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.605768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.605804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 
00:25:48.353 [2024-11-15 11:44:28.605954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.605991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.606138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.606174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.606338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.606375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.606531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.606568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.606676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.606712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.606817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.606853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.606997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.607033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.607193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.607230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.607360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.607399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.607581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.607619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 
00:25:48.353 [2024-11-15 11:44:28.607806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.607844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.608003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.608040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.608223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.608261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.608461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.608497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.608644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.608680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.608796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.608831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.608982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.609018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.609206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.609272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.609451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.609487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.609665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.609710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 
00:25:48.353 [2024-11-15 11:44:28.609889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.609924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.610116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.610181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.610379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.610414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.610614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.610652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.610804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.610842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.611027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.611065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.611240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.611276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.611439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.611476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.611600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.611637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.611789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.611828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 
00:25:48.353 [2024-11-15 11:44:28.611974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.612010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.612193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.353 [2024-11-15 11:44:28.612229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.353 qpair failed and we were unable to recover it. 00:25:48.353 [2024-11-15 11:44:28.612419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.612457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.612610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.612648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.612797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.612835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.612953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.612989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.613194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.613257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.613494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.613532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.613664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.613702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.613853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.613891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 
00:25:48.354 [2024-11-15 11:44:28.614054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.614091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.614273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.614363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.614499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.614537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.614722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.614760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.614885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.614922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.615077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.615114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.615254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.615293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.615420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.615460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.615644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.615681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.615863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.615900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 
00:25:48.354 [2024-11-15 11:44:28.616083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.616120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.616366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.616404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.616525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.616564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.616693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.616729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.616885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.616924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.617100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.617137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.617290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.617334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.617469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.617506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.617626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.617665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.617794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.617837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 
00:25:48.354 [2024-11-15 11:44:28.617953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.617990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.618105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.618143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.618273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.618319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.618479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.618517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.618651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.618689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.618837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.618874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.619026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.619062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.619180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.619219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.619377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.619414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 00:25:48.354 [2024-11-15 11:44:28.619528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.354 [2024-11-15 11:44:28.619567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.354 qpair failed and we were unable to recover it. 
00:25:48.354 [2024-11-15 11:44:28.619690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.619728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.619894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.619930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.620086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.620123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.620320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.620358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.620481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.620519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.620672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.620709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.620874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.620912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.621116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.621176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.621382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.621420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.621549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.621587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 
00:25:48.355 [2024-11-15 11:44:28.621745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.621783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.621941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.621978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.622108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.622145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.622311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.622349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.622537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.622574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.622696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.622733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.622865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.622903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.623090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.623127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.623285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.623331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.623493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.623534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 
00:25:48.355 [2024-11-15 11:44:28.623696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.623736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.623897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.623937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.624065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.624104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.624256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.624296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.624453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.624493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.624683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.624722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.624851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.624892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.625020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.625060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.625221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.625261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.625446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.625493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 
00:25:48.355 [2024-11-15 11:44:28.625658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.625697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.625854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.625893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.626061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.626101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.626288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.355 [2024-11-15 11:44:28.626373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.355 qpair failed and we were unable to recover it. 00:25:48.355 [2024-11-15 11:44:28.626534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.626721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.626759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.626914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.626970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.627186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.627225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.627373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.627414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.627574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.627613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 
00:25:48.356 [2024-11-15 11:44:28.627776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.627814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.628003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.628041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.628210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.628249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.628433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.628472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.628658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.628716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.628937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.628995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.629206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.629267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.629481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.629521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.629695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.629735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.629876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.629916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 
00:25:48.356 [2024-11-15 11:44:28.630039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.630079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.630241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.630281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.630423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.630464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.630593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.630633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.630762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.630801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.630969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.631009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.631196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.631256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.631427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.631471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.631671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.631714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.631874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.631915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 
00:25:48.356 [2024-11-15 11:44:28.632107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.632171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.632327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.632367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.632510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.632553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.632713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.632754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.632874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.632914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.633079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.633118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.633243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.633286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.633434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.633474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.633621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.633661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.633791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.633840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 
00:25:48.356 [2024-11-15 11:44:28.633993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.634033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.634225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.356 [2024-11-15 11:44:28.634283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.356 qpair failed and we were unable to recover it. 00:25:48.356 [2024-11-15 11:44:28.634473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.634514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.634651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.634691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.634887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.634927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.635091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.635131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.635293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.635341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.635493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.635532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.635669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.635708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.635872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.635913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 
00:25:48.357 [2024-11-15 11:44:28.636057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.636099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.636234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.636275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.636454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.636496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.636672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.636715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.636895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.636937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.637082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.637124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.637290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.637340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.637513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.637557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.637699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.637741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.637974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.638028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 
00:25:48.357 [2024-11-15 11:44:28.638186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.638229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.638384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.638427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.638605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.638647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.638819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.638859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.639050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.639091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.639262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.639312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.639496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.639538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.639681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.639726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.639906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.639948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.640125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.640177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 
00:25:48.357 [2024-11-15 11:44:28.640358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.640433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.640625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.640667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.640832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.640874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.641074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.641116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.641351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.641413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.641580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.641623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.641759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.641801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.641976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.642017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.642193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.642235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-11-15 11:44:28.642415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-11-15 11:44:28.642458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 
00:25:48.357 [2024-11-15 11:44:28.642630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.642671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.642804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.642846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.643016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.643058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.643234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.643288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.643485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.643530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.643677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.643719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.643854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.643895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.644054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.644098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.644258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.644325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.644514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.644579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 
00:25:48.358 [2024-11-15 11:44:28.644811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.644857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.644994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.645037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.645239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.645337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.645518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.645562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.645729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.645771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.645971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.646020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.646200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.646241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.646381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.646423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.646624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.646665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.646828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.646873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 
00:25:48.358 [2024-11-15 11:44:28.647043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.647085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.647248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.647288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.647477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.647517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.647720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.647761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.647934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.647974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.648137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.648178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.648344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.648399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.648562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.648605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.648761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.648806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.649029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.649073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 
00:25:48.358 [2024-11-15 11:44:28.649265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.649341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.649533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.649576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.649752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.649795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.649973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.650023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.650231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.650275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.650477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.650522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.650701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.650746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.650916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.650963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-11-15 11:44:28.651177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-11-15 11:44:28.651221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.651367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.651413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-11-15 11:44:28.651593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.651637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.651785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.651830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.652009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.652055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.652212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.652257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.652427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.652474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.652664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.652712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.652946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.652992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.653161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.653205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.653416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.653462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.653635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.653681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-11-15 11:44:28.653841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.653885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.654071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.654115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.654314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.654360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.654582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.654626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.654779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.654821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.654954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.654998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.655182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.655240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.655476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.655522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.655695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.655740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.655904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.655946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-11-15 11:44:28.656087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.656131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.656289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.656353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.656540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.656585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.656768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.656961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.657007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.657181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.657227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.657421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.657466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.657615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.657659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.657869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.657920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.658146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.658191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-11-15 11:44:28.658367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.658416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.658600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.658646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.658809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.658862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.659030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.659075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.659239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.659287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.659502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.659545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.659730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.659786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-11-15 11:44:28.659951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-11-15 11:44:28.659995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.660174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.660219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.660428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.660474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-11-15 11:44:28.660603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.660647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.660806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.660850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.661001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.661043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.661177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.661251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.661524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.661593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.661792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.661840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.662022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.662068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.662252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.662300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.662474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.662518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.662702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.662747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-11-15 11:44:28.662962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.663035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.663235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.663281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.663455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.663511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.663701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.663755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.663950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.664005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.664221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.664276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.664477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.664532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.664759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.664813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.665059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.665133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.665298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.665391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-11-15 11:44:28.665604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.665649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.665842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.665890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.666091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.666138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.666288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.666352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.666513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.666561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.666708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.666756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.666950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.666998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.667182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.667238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.667441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.667491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-11-15 11:44:28.667678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.667726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-11-15 11:44:28.667910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-11-15 11:44:28.667959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.668117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.668167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.668380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.668430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.668586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.668633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.668819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.668867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.669103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.669151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.669366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.669414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.669642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.669690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.669875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.669922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.670105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.670152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-11-15 11:44:28.670336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.670384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.670591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.670639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.670835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.670882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.671076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.671123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.671275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.671334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.671523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.671572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.671716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.671763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.671956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.672003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.672160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.672207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.672436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.672508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-11-15 11:44:28.672749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.672799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.672975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.673024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.673203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.673253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.673440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.673489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.673669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.673732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.673878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.673923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.674119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.674165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.674332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.674382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.674569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.674616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.674765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.674811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-11-15 11:44:28.675030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.675077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.675281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.675344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.675500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.675549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.675742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.675789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.675998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.676046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.676240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.676287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.676459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.676505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.676706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.676752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.676949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.676997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.677201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.677247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-11-15 11:44:28.677437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.677481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.677643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-11-15 11:44:28.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-11-15 11:44:28.677880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.677925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.678086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.678132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.678313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.678359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.678544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.678590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.678731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.678778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.678977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.679021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.679213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.679257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.679461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.679505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 
00:25:48.362 [2024-11-15 11:44:28.679692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.679737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.679897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.679941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.680092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.680137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.680295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.680365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.680560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.680605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.680755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.680800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.680959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.681004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.681197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.681242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.681442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.681488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.681704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.681749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 
00:25:48.362 [2024-11-15 11:44:28.681939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.681984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.682166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.682214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.682378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.682423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.682601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.682642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.682821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.682871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.683046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.683087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.683287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.683343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.683527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.683570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.683714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.683758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.683928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.683973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 
00:25:48.362 [2024-11-15 11:44:28.684137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.684181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.684355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.684409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.684651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.684723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.684917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.684961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.685148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.685191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.685429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.685503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.685748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.685820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.685988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.686034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.686194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.686239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.686497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.686571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 
00:25:48.362 [2024-11-15 11:44:28.686833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.686906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.687139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.687184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.687331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.362 [2024-11-15 11:44:28.687402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.362 qpair failed and we were unable to recover it. 00:25:48.362 [2024-11-15 11:44:28.687632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.687708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.687942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.688015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.688221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.688265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.688446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.688491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.688633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.688676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.688857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.688900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.689114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.689165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 
00:25:48.363 [2024-11-15 11:44:28.689348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.689395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.689616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.689662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.689881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.689926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.690077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.690122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.690281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.690342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.690532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.690581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.690768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.690815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.691003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.691052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.691249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.691297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.691496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.691543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 
00:25:48.363 [2024-11-15 11:44:28.691722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.691767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.691929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.691976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.692171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.692217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.692400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.692446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.692648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.692703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.692909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.692956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.693134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.693182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.693385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.693433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.693593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.693640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.693827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.693875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 
00:25:48.363 [2024-11-15 11:44:28.694067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.694113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.694264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.694320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.694544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.694589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.694772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.694818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.695006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.695054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.695236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.695281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.695458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.695506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.695684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.695730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.695904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.695952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.696103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.696172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 
00:25:48.363 [2024-11-15 11:44:28.696409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.363 [2024-11-15 11:44:28.696482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.363 qpair failed and we were unable to recover it. 00:25:48.363 [2024-11-15 11:44:28.696654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.696722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.696926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.696982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.697150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.697196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.697387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.697436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.697630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.697676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.697899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.697951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.698122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.698191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.698401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.698454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.698626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.698674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 
00:25:48.364 [2024-11-15 11:44:28.698861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.698909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.699084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.699131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.699328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.699398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.699573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.699620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.699805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.699852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.700058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.700112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.700386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.700463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.700675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.700740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.700906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.700958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.701196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.701246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 
00:25:48.364 [2024-11-15 11:44:28.701509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.701560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.701790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.701840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.701997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.702050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.702246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.702296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.702508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.702567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.702735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.702787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.702988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.703037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.703240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.703290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.703505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.703555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.703738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.703787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 
00:25:48.364 [2024-11-15 11:44:28.703970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.704020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.704193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.704241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.704480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.704532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.704698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.704747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.704914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.704962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.705161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.705211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.705369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.705419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.705617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.705667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.705839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.705890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.364 qpair failed and we were unable to recover it. 00:25:48.364 [2024-11-15 11:44:28.706112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.364 [2024-11-15 11:44:28.706165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 
00:25:48.365 [2024-11-15 11:44:28.706418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.706468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.706615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.706666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.706858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.706908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.707062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.707113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.707283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.707344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.707532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.707610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.707871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.365 [2024-11-15 11:44:28.707924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.365 qpair failed and we were unable to recover it. 00:25:48.365 [2024-11-15 11:44:28.708128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.708185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.708393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.708465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.708710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.708765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 
00:25:48.644 [2024-11-15 11:44:28.709004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.709058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.709345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.709396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.709625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.709676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.709878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.709928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.710097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.710147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.710325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.710376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.710522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.710573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.710728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.710780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.711013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.711063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.711265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.711326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 
00:25:48.644 [2024-11-15 11:44:28.711494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.711544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.711751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.711801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.711950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.712000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.712213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.712263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.712486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.712573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.712755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.712808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.712968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.713019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.713254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.713327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.713538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.713590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.713787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.713840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 
00:25:48.644 [2024-11-15 11:44:28.714044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.714093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.714298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.714369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.714568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.714621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.714865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.714915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.715063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.644 [2024-11-15 11:44:28.715136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.644 qpair failed and we were unable to recover it. 00:25:48.644 [2024-11-15 11:44:28.715338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.715390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.715597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.715649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.715814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.715866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.716058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.716109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.716323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.716374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 
00:25:48.645 [2024-11-15 11:44:28.716615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.716668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.716919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.716970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.717149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.717203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.717386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.717438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.717659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.717711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.717877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.717927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.718119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.718169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.718378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.718430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.718612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.718665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.718894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.718944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 
00:25:48.645 [2024-11-15 11:44:28.719150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.719199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.719361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.719428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.719693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.719746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.719953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.720004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.720140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.720191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.720396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.720459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.720683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.720733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.720964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.721014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.721166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.721218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.721472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.721534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 
00:25:48.645 [2024-11-15 11:44:28.721708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.721759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.721926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.721976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.722135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.722203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.722390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.722459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.722679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.722734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.722983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.723038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.723245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.723299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.723560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.723613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.723816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.723866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.724023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.724093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 
00:25:48.645 [2024-11-15 11:44:28.724281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.724353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.724565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.724619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.724857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.724907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.645 qpair failed and we were unable to recover it. 00:25:48.645 [2024-11-15 11:44:28.725108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.645 [2024-11-15 11:44:28.725158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.725374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.725426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.725650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.725703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.725861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.725911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.726115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.726165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.726356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.726408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.726665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.726721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 
00:25:48.646 [2024-11-15 11:44:28.726937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.726991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.727204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.727258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.727483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.727540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.727777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.727834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.728051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.728106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.728283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.728363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.728544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.728601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.728850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.728907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.729130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.729184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.729411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.729466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 
00:25:48.646 [2024-11-15 11:44:28.729720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.729784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.730023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.730078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.730280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.730360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.730542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.730595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.730783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.730854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.731077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.731132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.731379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.731434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.731648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.731701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.731920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.731982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.732203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.732257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 
00:25:48.646 [2024-11-15 11:44:28.732526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.732581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.732838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.732905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.733105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.733159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.733372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.733428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.733646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.733700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.733874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.733935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.734223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.734280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.734553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.734607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.734816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.734869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.735089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.735146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 
00:25:48.646 [2024-11-15 11:44:28.735338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.735395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.646 [2024-11-15 11:44:28.735610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.646 [2024-11-15 11:44:28.735664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.646 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.735911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.735978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.736217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.736275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.736515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.736570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.736784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.736837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.737057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.737110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.737341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.737398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.737605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.737659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.737906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.737968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 
00:25:48.647 [2024-11-15 11:44:28.738193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.738253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.738497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.738558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.738753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.738813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.739054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.739111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.739321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.739377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.739613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.739681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.739879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.739934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.740144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.740198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.740414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.740471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.740687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.740742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 
00:25:48.647 [2024-11-15 11:44:28.740912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.740966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.741215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.741270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.741525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.741582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.741835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.741890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.742104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.742160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.742376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.742435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.742629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.742683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.742843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.742896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.743067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.743120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.743333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.743392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 
00:25:48.647 [2024-11-15 11:44:28.743592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.743651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.743835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.743892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.744117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.744176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.744381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.744449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.744730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.744789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.744983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.745040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.745269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.745346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.745626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.745697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.745906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.745966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 00:25:48.647 [2024-11-15 11:44:28.746235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.647 [2024-11-15 11:44:28.746292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.647 qpair failed and we were unable to recover it. 
00:25:48.647 [2024-11-15 11:44:28.746549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.746605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.746826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.746886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.747093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.747151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.747420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.747480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.747688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.747746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.747957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.748033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.748257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.748332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.748604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.748661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.748889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.748946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.749145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.749205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 
00:25:48.648 [2024-11-15 11:44:28.749461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.749522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.749780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.749837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.750050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.750109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.750331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.750394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.750599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.750657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.750877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.750934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.751135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.751192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.751461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.751531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.751721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.751778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.752015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.752073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 
00:25:48.648 [2024-11-15 11:44:28.752342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.752403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.752612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.752688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.752919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.752977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.753166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.753225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.753514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.753573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.753808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.753870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.754114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.754171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.754419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.754479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.754743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.754799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.755051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.755124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 
00:25:48.648 [2024-11-15 11:44:28.755326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.755386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.755623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.755680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.755912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.648 [2024-11-15 11:44:28.755969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.648 qpair failed and we were unable to recover it. 00:25:48.648 [2024-11-15 11:44:28.756154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.756220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.756494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.756553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.756787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.756847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.757082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.757139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.757371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.757451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.757730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.757789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.758026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.758083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 
00:25:48.649 [2024-11-15 11:44:28.758322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.758382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.758575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.758635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.758861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.758918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.759106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.759163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.759386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.759446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.759698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.759763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.760036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.760095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.760360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.760422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.760657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.760716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.760939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.760999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 
00:25:48.649 [2024-11-15 11:44:28.761239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.761298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.761563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.761634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.761897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.761959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.762182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.762240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.762498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.762557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.762749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.762806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.763072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.763129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.763336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.763398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.763581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.763644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.763843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.763900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 
00:25:48.649 [2024-11-15 11:44:28.764128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.764185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.764464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.764538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.764791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.764852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.765084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.765142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.765374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.765450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.765621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.765695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.765889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.765947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.766214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.766273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.766558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.766616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.766812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.766884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 
00:25:48.649 [2024-11-15 11:44:28.767122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.767181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.649 [2024-11-15 11:44:28.767429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.649 [2024-11-15 11:44:28.767489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.649 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.767746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.767804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.768041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.768102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.768341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.768402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.768632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.768692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.768888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.768946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.769174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.769235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.769483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.769546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.769808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.769867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 
00:25:48.650 [2024-11-15 11:44:28.770164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.770226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.770455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.770524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.770819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.770882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.771174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.771237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.771519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.771581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.771830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.771894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.772129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.772187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.772421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.772482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.772721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.772788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.773067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.773125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 
00:25:48.650 [2024-11-15 11:44:28.773371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.773435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.773667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.773742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.774028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.774094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.774385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.774452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.774689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.774748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.774944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.775002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.775235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.775295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.775560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.775624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.775823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.775894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.776120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.776191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 
00:25:48.650 [2024-11-15 11:44:28.776499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.776578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.776795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.776857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.777055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.777118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.777343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.777408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.777695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.777758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.778078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.778143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.778382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.778447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.778662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.778739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.778951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.779021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 00:25:48.650 [2024-11-15 11:44:28.779320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.650 [2024-11-15 11:44:28.779390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.650 qpair failed and we were unable to recover it. 
00:25:48.651 [2024-11-15 11:44:28.779658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.779720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.779980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.780043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.780277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.780405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.780683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.780748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.781009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.781072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.781329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.781393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.781652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.781718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.781959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.782023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.782275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.782366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.782617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.782693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 
00:25:48.651 [2024-11-15 11:44:28.783005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.783078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.783331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.783398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.783649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.783717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.783967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.784032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.784322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.784389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.784655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.784718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.784916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.784977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.785236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.785336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.785543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.785607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.785851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.785913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 
00:25:48.651 [2024-11-15 11:44:28.786172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.786238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.786557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.786639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.786908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.786972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.787177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.787240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.787553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.787630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.787892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.787955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.788250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.788346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.788569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.788650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.788897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.788963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 00:25:48.651 [2024-11-15 11:44:28.789249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.651 [2024-11-15 11:44:28.789357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.651 qpair failed and we were unable to recover it. 
00:25:48.656 [2024-11-15 11:44:28.853373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.656 [2024-11-15 11:44:28.853439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.656 qpair failed and we were unable to recover it. 00:25:48.656 [2024-11-15 11:44:28.853711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.656 [2024-11-15 11:44:28.853773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.656 qpair failed and we were unable to recover it. 00:25:48.656 [2024-11-15 11:44:28.854020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.656 [2024-11-15 11:44:28.854088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.656 qpair failed and we were unable to recover it. 00:25:48.656 [2024-11-15 11:44:28.854333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.656 [2024-11-15 11:44:28.854412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.656 qpair failed and we were unable to recover it. 00:25:48.656 [2024-11-15 11:44:28.854707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.656 [2024-11-15 11:44:28.854770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.656 qpair failed and we were unable to recover it. 00:25:48.656 [2024-11-15 11:44:28.855012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.656 [2024-11-15 11:44:28.855074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.656 qpair failed and we were unable to recover it. 00:25:48.656 [2024-11-15 11:44:28.855376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.855456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.855699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.855762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.856012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.856074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.856277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.856369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 
00:25:48.657 [2024-11-15 11:44:28.856633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.856696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.856917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.856994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.857229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.857291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.857563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.857626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.857883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.857945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.858212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.858285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.858551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.858616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.858906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.858967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.859168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.859230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.859554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.859627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 
00:25:48.657 [2024-11-15 11:44:28.859877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.859940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.860184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.860247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.860512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.860588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.860819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.860899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.861168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.861231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.861530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.861598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.861885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.861952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.862242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.862327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.862537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.862598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.862849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.862912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 
00:25:48.657 [2024-11-15 11:44:28.863197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.863261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.863562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.863629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.863865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.863928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.864152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.864219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.864508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.864586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.864847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.864913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.865133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.865200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.865498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.865569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.865812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.865879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.657 [2024-11-15 11:44:28.866114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.866179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 
00:25:48.657 [2024-11-15 11:44:28.866437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.657 [2024-11-15 11:44:28.866502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.657 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.866758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.866820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.867112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.867189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.867446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.867511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.867804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.867867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.868103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.868166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.868368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.868446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.868731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.868802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.869045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.869109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.869357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.869421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 
00:25:48.658 [2024-11-15 11:44:28.869656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.869719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.869983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.870050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.870327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.870401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.870655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.870720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.870957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.871020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.871252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.871337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.871615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.871679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.871937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.872000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.872267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.872362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.872637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.872702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 
00:25:48.658 [2024-11-15 11:44:28.872971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.873034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.873225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.873288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.873561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.873643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.873941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.874002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.874292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.874399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.874615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.874679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.874975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.875040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.875249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.875336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.875589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.875655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.875878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.875940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 
00:25:48.658 [2024-11-15 11:44:28.876188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.876254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.876562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.876627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.876824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.876892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.877140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.877216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.877546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.877617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.877821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.877884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.878072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.878134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.878377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.878442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.878733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.658 [2024-11-15 11:44:28.878799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.658 qpair failed and we were unable to recover it. 00:25:48.658 [2024-11-15 11:44:28.879000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.879067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 
00:25:48.659 [2024-11-15 11:44:28.879270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.879350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.879564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.879626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.879824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.879891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.880172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.880236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.880459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.880524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.880763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.880825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.881102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.881167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.881469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.881535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.881779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.881841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.882077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.882138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 
00:25:48.659 [2024-11-15 11:44:28.882447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.882514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.882806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.882869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.883101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.883163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.883432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.883497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.883764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.883830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.884078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.884140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.884373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.884439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.884695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.884769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.885052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.885116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.885329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.885398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 
00:25:48.659 [2024-11-15 11:44:28.885647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.885717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.885981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.886043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.886337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.886406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.886654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.886717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.886910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.886972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.887246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.887353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.887687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.887751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.887963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.888026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.888278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.888364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.888587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.888655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 
00:25:48.659 [2024-11-15 11:44:28.888904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.888970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.889214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.889276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.889553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.889616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.889863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.889931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.890256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.890339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.890559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.890621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.890860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.890923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.659 [2024-11-15 11:44:28.891123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.659 [2024-11-15 11:44:28.891188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.659 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.891446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.891510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.891756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.891819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 
00:25:48.660 [2024-11-15 11:44:28.892060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.892122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.892383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.892459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.892709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.892773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.893036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.893099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.893350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.893414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.893680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.893744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.894009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.894072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.894360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.894426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.894668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.894748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.894982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.895046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 
00:25:48.660 [2024-11-15 11:44:28.895336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.895402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.895662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.895725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.895980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.896051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.896346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.896415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.896635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.896698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.896973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.897039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.897280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.897366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.897627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.897691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.897939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.898000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.898215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.898277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 
00:25:48.660 [2024-11-15 11:44:28.898581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.898656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.898929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.899003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.899283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.899365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.899618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.899695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.899936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.900000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.900216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.900281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.900577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.900640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.900929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.901008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.901258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.901343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 00:25:48.660 [2024-11-15 11:44:28.901592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.660 [2024-11-15 11:44:28.901654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.660 qpair failed and we were unable to recover it. 
00:25:48.666 [2024-11-15 11:44:28.966247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.966334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.966563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.966626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.966835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.966910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.967150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.967213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.967485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.967551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.967783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.967844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.968126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.968204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.968520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.968589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.968788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.968862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.969111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.969174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 
00:25:48.666 [2024-11-15 11:44:28.969456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.969537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.969801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.969864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.970100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.970162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.970441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.970504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.970748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.970829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.971063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.971129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.971392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.971457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.971702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.971766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.972064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.972131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.972421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.972486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 
00:25:48.666 [2024-11-15 11:44:28.972699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.972762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.973015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.973079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.973336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.973418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.973678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.973742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.973962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.974025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.974268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.974349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.974639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.974721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.974988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.975055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.975252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.975331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.975578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.975640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 
00:25:48.666 [2024-11-15 11:44:28.975840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.975903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.976163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.666 [2024-11-15 11:44:28.976229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.666 qpair failed and we were unable to recover it. 00:25:48.666 [2024-11-15 11:44:28.976461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.976526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.976768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.976830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.977041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.977105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.977357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.977446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.977699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.977763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.978010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.978072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.978314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.978384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.978630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.978707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 
00:25:48.667 [2024-11-15 11:44:28.979012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.979075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.979286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.979367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.979606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.979668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.979953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.980018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.980326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.980391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.980627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.980688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.980911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.980973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.981264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.981357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.981601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.981663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.981914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.981977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 
00:25:48.667 [2024-11-15 11:44:28.982220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.982283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.982576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.982642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.982852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.982915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.983130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.983192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.983431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.983498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.983781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.983848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.984061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.984124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.984388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.984454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.984709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.984771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.985048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.985126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 
00:25:48.667 [2024-11-15 11:44:28.985380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.985446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.985688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.985750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.986035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.986108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.986382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.986464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.986723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.986787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.986984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.987047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.987256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.987342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.667 [2024-11-15 11:44:28.987622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.667 [2024-11-15 11:44:28.987689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.667 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.987903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.987965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.988245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.988323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 
00:25:48.668 [2024-11-15 11:44:28.988582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.988644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.988909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.988980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.989230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.989292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.989603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.989666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.989914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.989977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.990250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.990335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.990642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.990705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.990924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.990992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.991190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.991251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.991530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.991598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 
00:25:48.668 [2024-11-15 11:44:28.991846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.991909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.992077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.992139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.992394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.992459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.992658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.992728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.992990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.993054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.993323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.993388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.993605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.993671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.993914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.993981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.994169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.994231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.994514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.994578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 
00:25:48.668 [2024-11-15 11:44:28.994837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.994903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.995146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.995212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.995517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.995582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.995860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.995922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.996202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.996278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.996524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.996588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.996862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.996925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.997169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.997233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.997523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.997603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.997814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.997877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 
00:25:48.668 [2024-11-15 11:44:28.998123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.998184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.998464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.998530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.998788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.998858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.999148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.999212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.999464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.999530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:28.999742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.668 [2024-11-15 11:44:28.999804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.668 qpair failed and we were unable to recover it. 00:25:48.668 [2024-11-15 11:44:29.000011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.000077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.000320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.000396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.000625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.000688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.000895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.000957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 
00:25:48.669 [2024-11-15 11:44:29.001189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.001252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.001526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.001597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.001830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.001894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.002147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.002210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.002461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.002527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.002780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.002846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.003099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.003162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.003406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.003471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.003757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.003833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.004132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.004197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 
00:25:48.669 [2024-11-15 11:44:29.004464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.004530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.004725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.004788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.005027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.005089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.005353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.005419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.005646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.005711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.005994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.006057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.006337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.006418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.006654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.006721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.006973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.007037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.007272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.007364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 
00:25:48.669 [2024-11-15 11:44:29.007626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.007717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.007958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.008021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.008258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.008341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.008593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.008656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.008922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.009001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.009254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.009358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.009619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.009682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.009921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.009984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.010197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.010262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.010496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.010564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 
00:25:48.669 [2024-11-15 11:44:29.010844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.010907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.011110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.011175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.011418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.011503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.011746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.011812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.012017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.669 [2024-11-15 11:44:29.012080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.669 qpair failed and we were unable to recover it. 00:25:48.669 [2024-11-15 11:44:29.012344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.012411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.012632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.012697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.012903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.012982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.013247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.013330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.013546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.013609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 
00:25:48.670 [2024-11-15 11:44:29.013884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.013947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.014209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.014274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.014568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.014632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.014876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.014939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.015237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.015298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.015592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.015658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.015954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.016017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.016262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.016357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.016583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.016647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.016878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.016955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 
00:25:48.670 [2024-11-15 11:44:29.017206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.017272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.017531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.017595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.017836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.017898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.018189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.018255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.018471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.018534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.018758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.018821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.019055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.019117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.019364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.019432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.019723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.019790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.020045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.020108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 
00:25:48.670 [2024-11-15 11:44:29.020328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.020396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.020698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.020762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.021016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.021083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.021279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.021363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.021627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.021690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.021934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.021998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.022262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.022356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.022623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.022687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.022936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.023000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.023244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.023329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 
00:25:48.670 [2024-11-15 11:44:29.023604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.023670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.023916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.023980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.024183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.024246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.670 [2024-11-15 11:44:29.024504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.670 [2024-11-15 11:44:29.024568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.670 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.024849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.024921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.025131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.025195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.025474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.025541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.025822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.025884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.026132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.026195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.026512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.026578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 
00:25:48.671 [2024-11-15 11:44:29.026834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.026897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.027103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.027167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.027407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.027472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.027720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.027783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.028017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.028080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.028328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.028392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.028657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.028721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.028904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.028968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.032465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.032566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.032886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.032955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 
00:25:48.671 [2024-11-15 11:44:29.033215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.033283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.033576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.033640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.033896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.033959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.034207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.034271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.034562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.034629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.034841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.034905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.035130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.035193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.035449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.035516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.035766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.035829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.036033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.036095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 
00:25:48.671 [2024-11-15 11:44:29.036337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.036403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.036647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.036711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.037013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.037077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.037338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.037403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.037643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.037707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.037940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.038002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.671 [2024-11-15 11:44:29.038194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.671 [2024-11-15 11:44:29.038256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.671 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.038492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.038556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.038772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.038838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.039082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.039145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 
00:25:48.672 [2024-11-15 11:44:29.039425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.039490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.039741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.039803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.040027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.040090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.040331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.040396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.040675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.040740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.040980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.041055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.041358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.041422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.041711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.041774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.042034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.042097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.042320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.042384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 
00:25:48.672 [2024-11-15 11:44:29.042591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.042654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.042910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.042973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.043207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.043270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.043566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.043630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.043846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.043907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.044201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.044263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.044565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.044629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.044829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.044891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.045147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.045209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.045457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.045524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 
00:25:48.672 [2024-11-15 11:44:29.045748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.045811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.046075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.046137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.046338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.046402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.046656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.046719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.046963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.047025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.047231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.047294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.047557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.047621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.047827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.047890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.048138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.048199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.048465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.048530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 
00:25:48.672 [2024-11-15 11:44:29.048765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.048829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.049030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.049097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.049346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.049424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.049703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.049767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.049973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.050035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-11-15 11:44:29.050275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-11-15 11:44:29.050355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-11-15 11:44:29.050601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-11-15 11:44:29.050664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-11-15 11:44:29.050854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-11-15 11:44:29.050920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.051161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.051225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.051501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.051566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 
00:25:48.940 [2024-11-15 11:44:29.051802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.051865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.052074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.052138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.052395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.052460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.052730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.052793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.053035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.053101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.940 [2024-11-15 11:44:29.053341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.940 [2024-11-15 11:44:29.053407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.940 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.053702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.053766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.054017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.054081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.054332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.054398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.054644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.054706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 
00:25:48.941 [2024-11-15 11:44:29.054960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.055024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.055268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.055364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.055581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.055644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.055889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.055952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.056189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.056252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.056467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.056531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.056772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.056836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.057053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.057116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.057335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.057399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.057606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.057680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 
00:25:48.941 [2024-11-15 11:44:29.057895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.057957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.058248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.058330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.058570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.058633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.058882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.058944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.059184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.059247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.059554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.059618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.059893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.059955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.060210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.060274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.060566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.060629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.060856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.060919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 
00:25:48.941 [2024-11-15 11:44:29.061163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.061225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.061460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.061524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.061809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.061872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.062203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.062322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.062629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.062720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.062996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.063090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.063394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.063486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.063803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.063893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.064242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.064356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.064666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.064755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 
00:25:48.941 [2024-11-15 11:44:29.065100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.065190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.065558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.065651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.065998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.941 [2024-11-15 11:44:29.066085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.941 qpair failed and we were unable to recover it. 00:25:48.941 [2024-11-15 11:44:29.066434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.066525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.066838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.066909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.067170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.067236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.067511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.067604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.067865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.067931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.068175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.068240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.068517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.068584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 
00:25:48.942 [2024-11-15 11:44:29.068830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.068897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.069206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.069293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.069628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.069717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.070064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.070155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.070505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.070592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.070941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.071023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.071351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.071449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.071799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.071887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.072205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.072294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.072652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.072743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 
00:25:48.942 [2024-11-15 11:44:29.073071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.073143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.073413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.073480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.073695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.073761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.074015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.074084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.074373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.074440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.074724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.074788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.075073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.075138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.075433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.075520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.075830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.075919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.076238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.076343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 
00:25:48.942 [2024-11-15 11:44:29.076613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.076704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.077060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.077147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.077506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.077596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.077921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.078012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.078330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.078430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.078749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.078839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.079154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.079245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.079652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.079721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.079984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.080049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.080233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.080298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 
00:25:48.942 [2024-11-15 11:44:29.080616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.080681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.942 qpair failed and we were unable to recover it. 00:25:48.942 [2024-11-15 11:44:29.080890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.942 [2024-11-15 11:44:29.080956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.081200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.081265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.081538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.081603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.081916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.082007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.082350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.082442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.082799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.082901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.083251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.083358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.083637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.083725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.084021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.084109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 
00:25:48.943 [2024-11-15 11:44:29.084463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.084555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.084908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.084994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.085318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.085420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.085727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.085799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.086052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.086116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.086406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.086473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.086767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.086832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.087084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.087153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.087420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.087488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.087743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.087809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 
00:25:48.943 [2024-11-15 11:44:29.088034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.088119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.088395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.088485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.088832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.088921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.089279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.089398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.089704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.089789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.090065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.090154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.090467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.090557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.090859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.090948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.091266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.091388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.091699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.091789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 
00:25:48.943 [2024-11-15 11:44:29.092050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.092144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.092429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.092498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.092727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.092794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.093015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.093080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.093369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.093436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.093683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.093751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.094000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.094064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.094264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.943 [2024-11-15 11:44:29.094344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.943 qpair failed and we were unable to recover it. 00:25:48.943 [2024-11-15 11:44:29.094647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.094738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.095052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.095141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 
00:25:48.944 [2024-11-15 11:44:29.095444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.095534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.095818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.095906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.096220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.096332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.096676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.096766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.097110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.097200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.097573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.097664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.097971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.098074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.098374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.098522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.098819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.098876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.099062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.099129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 
00:25:48.944 [2024-11-15 11:44:29.099389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.099459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.099695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.099764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.099977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.100042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.100279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.100363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.100610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.100679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.101002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.101074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.101368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.101464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.101748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.101816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.102007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.102072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.102323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.102388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 
00:25:48.944 [2024-11-15 11:44:29.102685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.102750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.102986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.103050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.103332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.103397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.104768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.104841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.105068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.105134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.105367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.105437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.105694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.105759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.106036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.106099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.106345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.106411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.106654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.106718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 
00:25:48.944 [2024-11-15 11:44:29.106972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.107036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.107324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.107389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.107637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.107701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.107957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.108021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.108270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.108348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.108609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.944 [2024-11-15 11:44:29.108673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.944 qpair failed and we were unable to recover it. 00:25:48.944 [2024-11-15 11:44:29.108954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.109017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.109280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.109361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.109617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.109682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.109881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.109948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 
00:25:48.945 [2024-11-15 11:44:29.111446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.111519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.111773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.111839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.112131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.112196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.112442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.112511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.112768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.112831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.113075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.113139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.113391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.113468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.113679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.113741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.115125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.115196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.115468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.115534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 
00:25:48.945 [2024-11-15 11:44:29.115818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.115882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.116128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.116191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.116445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.116511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.116724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.116786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.117033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.117096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.117338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.117403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.117687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.117749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.118024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.118086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.118367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.118432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.118669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.118732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 
00:25:48.945 [2024-11-15 11:44:29.118951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.119018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.119330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.119395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.119637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.119705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.119956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.120020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.120264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.120346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.120590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.120655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.120898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.120961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.121201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.945 [2024-11-15 11:44:29.121265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.945 qpair failed and we were unable to recover it. 00:25:48.945 [2024-11-15 11:44:29.121544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.121610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.121890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.121952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 
00:25:48.946 [2024-11-15 11:44:29.122202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.122264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.122470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.122534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.122770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.122833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.123113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.123177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.123451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.123516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.123760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.123823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.124077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.124139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.124382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.124447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.124703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.124767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.125009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.125072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 
00:25:48.946 [2024-11-15 11:44:29.125300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.125378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.125631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.125694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.125902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.125965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.126208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.126273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.126505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.126570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.126855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.126918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.127137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.127200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.127482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.127547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.127842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.127906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.128150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.128212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 
00:25:48.946 [2024-11-15 11:44:29.128467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.128534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.128795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.128857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.129093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.129157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.129414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.129479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.129694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.129759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.130005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.130068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.130279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.130366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.130650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.130713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.130924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.130989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 00:25:48.946 [2024-11-15 11:44:29.131247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.946 [2024-11-15 11:44:29.131326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.946 qpair failed and we were unable to recover it. 
00:25:48.952 [2024-11-15 11:44:29.195633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.195688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.195920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.195977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.196194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.196252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.196576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.196636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.196870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.196934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.197184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.197243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.197523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.197587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.197846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.197909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.198153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.198212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.198442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.198503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 
00:25:48.952 [2024-11-15 11:44:29.198736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.198794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.199045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.199103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.199335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.199399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.199628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.199697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.199952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.200016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.952 qpair failed and we were unable to recover it. 00:25:48.952 [2024-11-15 11:44:29.200257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.952 [2024-11-15 11:44:29.200332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.200557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.200616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.200870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.200930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.201188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.201251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.201533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.201592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 
00:25:48.953 [2024-11-15 11:44:29.201949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.202013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.202236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.202298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.202551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.202614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.202855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.202917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.203158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.203225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.203532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.203588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.203833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.203888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.204125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.204184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.204435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.204491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.204711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.204770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 
00:25:48.953 [2024-11-15 11:44:29.204984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.205043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.205224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.205283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.205547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.205606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.205801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.205860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.206121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.206182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.206420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.206478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.206725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.206796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.207012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.207072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.207329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.207401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.207572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.207629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 
00:25:48.953 [2024-11-15 11:44:29.207877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.207935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.208164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.208226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.208502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.208563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.208832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.208891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.209113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.209172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.209423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.209484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.209703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.209761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.209973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.210037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.210294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.210367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.210591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.210650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 
00:25:48.953 [2024-11-15 11:44:29.210866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.210926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.211140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.211197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.211440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.211493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.953 [2024-11-15 11:44:29.211686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.953 [2024-11-15 11:44:29.211751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.953 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.212005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.212058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.212260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.212324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.212533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.212585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.212783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.212836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.213094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.213149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.213399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.213452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 
00:25:48.954 [2024-11-15 11:44:29.213725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.213784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.214094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.214154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.214401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.214454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.214730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.214790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.215042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.215093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.215275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.215340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.215624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.215676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.215858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.215911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.216149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.216201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.216413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.216466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 
00:25:48.954 [2024-11-15 11:44:29.216642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.216696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.216918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.216971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.217168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.217221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.217433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.217485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.217663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.217713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.217939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.217988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.218211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.218261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.218472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.218523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.218758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.218806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.219000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.219054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 
00:25:48.954 [2024-11-15 11:44:29.219294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.219356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.219547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.219597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.219795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.219844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.219999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.220067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.220269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.220326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.220507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.220553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.220704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.954 [2024-11-15 11:44:29.220751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.954 qpair failed and we were unable to recover it. 00:25:48.954 [2024-11-15 11:44:29.220971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.221016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.221200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.221257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.221468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.221496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 
00:25:48.955 [2024-11-15 11:44:29.221596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.221625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.221772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.221800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.221899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.221927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.222930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.222958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 
00:25:48.955 [2024-11-15 11:44:29.223083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.223111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.223230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.223258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.223405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.223433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.223575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.223602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.223742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.223769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.223881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.223907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.224051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.224197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.224319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.224491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 
00:25:48.955 [2024-11-15 11:44:29.224640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.224788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.224914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.224940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.225824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 
00:25:48.955 [2024-11-15 11:44:29.225971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.225996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.226078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.226103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.226270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.226300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.226434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.226459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.226610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.955 [2024-11-15 11:44:29.226636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.955 qpair failed and we were unable to recover it. 00:25:48.955 [2024-11-15 11:44:29.226735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.226760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.226887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.226912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.227034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.227173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.227289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 
00:25:48.956 [2024-11-15 11:44:29.227427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.227571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.227735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.227869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.227900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.228024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.228182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.228368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.228488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.228627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.228735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 
00:25:48.956 [2024-11-15 11:44:29.228886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.228916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.229949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.229974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 
00:25:48.956 [2024-11-15 11:44:29.230204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.230889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.230984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.231009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.956 [2024-11-15 11:44:29.231126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.956 [2024-11-15 11:44:29.231155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.956 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.231284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.231328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.231429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.231459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 
00:25:48.957 [2024-11-15 11:44:29.231557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.231582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.231661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.231685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.231784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.231809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.231922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.231953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.232187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.232321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.232468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.232648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.232769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 
00:25:48.957 [2024-11-15 11:44:29.232892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.232920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.233902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.233935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.234035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.234183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 
00:25:48.957 [2024-11-15 11:44:29.234336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.234506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.234654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.234808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.234961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.234988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.235084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.235111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.235216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.235244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.235400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.235428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.235546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.235572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 00:25:48.957 [2024-11-15 11:44:29.235696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.957 [2024-11-15 11:44:29.235721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:48.957 qpair failed and we were unable to recover it. 
00:25:48.957 [2024-11-15 11:44:29.235815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.668495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.668776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.668813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.669004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.669043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.669164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.669204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.669380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.669421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.669580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.669621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.669774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.669813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.669990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.670030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.670156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.670195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.670367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.670392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 
00:25:49.557 [2024-11-15 11:44:29.670486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.670511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.557 [2024-11-15 11:44:29.670606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.557 [2024-11-15 11:44:29.670632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.557 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.670725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.670768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.670895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.670925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.671089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.671127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.671344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.671384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.671551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.671589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.671725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.671764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.671930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.671967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.672082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.672120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 
00:25:49.558 [2024-11-15 11:44:29.672296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.672377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.672525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.672563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.672722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.672760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.672972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.673018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.673136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.673174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.673348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.673385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.673522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.673560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.673753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.673790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.673952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.673988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.674118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.674154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 
00:25:49.558 [2024-11-15 11:44:29.674281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.674327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.674470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.674507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.674694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.674731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.674865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.674903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.675044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.675106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.675320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.675359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.675529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.675567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.675762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.675801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.675922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.675962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.676121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.676160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 
00:25:49.558 [2024-11-15 11:44:29.676345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.676383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.676509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.676547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.676702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.676742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.676925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.676962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.677119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.677157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.677291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.677340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.677500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.677537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.677700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.677740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.677922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.677961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.558 qpair failed and we were unable to recover it. 00:25:49.558 [2024-11-15 11:44:29.678104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.558 [2024-11-15 11:44:29.678141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 
00:25:49.559 [2024-11-15 11:44:29.678385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.678433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.678594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.678644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.678841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.678888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.679045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.679095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.679321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.679377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.679597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.679651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.679867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.679922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.680142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.680198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.680473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.680524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.680714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.680765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 
00:25:49.559 [2024-11-15 11:44:29.680937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.680988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.681188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.681241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.681456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.681514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.681770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.681835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.682045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.682100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.682322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.682378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.682578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.682632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.682831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.682884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.683123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.683177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.683380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.683436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 
00:25:49.559 [2024-11-15 11:44:29.683650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.683701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.683875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.683929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.684178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.684232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.684459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.684515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.684696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.684754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.684975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.685029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.685241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.685296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.685594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.685650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.685872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.685929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.686119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.686173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 
00:25:49.559 [2024-11-15 11:44:29.686364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.686419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.686675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.686730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.686937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.686991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.687224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.687281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.687534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.687588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.687806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.687860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.688059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.559 [2024-11-15 11:44:29.688112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.559 qpair failed and we were unable to recover it. 00:25:49.559 [2024-11-15 11:44:29.688357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.688637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.688690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.688910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.688966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 
00:25:49.560 [2024-11-15 11:44:29.689230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.689285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.689508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.689562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.689809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.689863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.690112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.690167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.690429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.690485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.690651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.690705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.690916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.690970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.691166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.691222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.691488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.691544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.691718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.691772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 
00:25:49.560 [2024-11-15 11:44:29.691981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.692037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.692272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.692341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.692559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.692617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.692865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.692929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.693100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.693156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.693337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.693393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.693609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.693661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.693843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.693897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.694151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.694207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.694411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.694465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 
00:25:49.560 [2024-11-15 11:44:29.694698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.694753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.694980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.695034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.695219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.695273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.695487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.695543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.695724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.695778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.695959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.696013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.696244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.696300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.696542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.696596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.696802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.696859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.697063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.697118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 
00:25:49.560 [2024-11-15 11:44:29.697359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.697415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.697660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.697714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.697964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.698017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.698230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.698286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.698538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.560 [2024-11-15 11:44:29.698596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.560 qpair failed and we were unable to recover it. 00:25:49.560 [2024-11-15 11:44:29.698839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.698893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.699130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.699202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.699470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.699522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.699720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.699770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.700001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.700051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 
00:25:49.561 [2024-11-15 11:44:29.700250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.700323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.700561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.700611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.700816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.700869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.701070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.701122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.701290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.701352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.701561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.701611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.701841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.701892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.702072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.702122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.702355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.702407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.702553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.702604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 
00:25:49.561 [2024-11-15 11:44:29.702808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.702859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.703100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.703147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.703331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.703379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.703607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.703654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.703860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.703908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.704097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.704146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.704347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.704396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.704590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.704638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.704830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.704879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.705036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.705084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 
00:25:49.561 [2024-11-15 11:44:29.705275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.705344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.705532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.705581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.705765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.705812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.706017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.706064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.706255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.706315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.706464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.706512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.706699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.706745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.706952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.706999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.707224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.707271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.707470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.707515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 
00:25:49.561 [2024-11-15 11:44:29.707665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.707710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.707887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.707932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.708140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.561 [2024-11-15 11:44:29.708186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.561 qpair failed and we were unable to recover it. 00:25:49.561 [2024-11-15 11:44:29.708362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.708407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.708564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.708608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.708785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.708831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.709050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.709095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.709235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.709279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.709474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.709519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.709649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.709694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 
00:25:49.562 [2024-11-15 11:44:29.709837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.709895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.710077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.710125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.710269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.710328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.710548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.710592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.710793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.710839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.710976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.711024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.711174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.711221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.711377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.711423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.711590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.711635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.711831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.711873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 
00:25:49.562 [2024-11-15 11:44:29.712047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.712089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.712263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.712317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.712494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.712536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.712735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.712778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.712980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.713034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.713240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.713293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.713512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.713554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.713704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.713746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.713949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.713990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.714153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.714195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 
00:25:49.562 [2024-11-15 11:44:29.714421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.714467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.714664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.714717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.714927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.714981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.715252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.562 [2024-11-15 11:44:29.715362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.562 qpair failed and we were unable to recover it. 00:25:49.562 [2024-11-15 11:44:29.715577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.715622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.715818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.715863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.716042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.716105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.716340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.716397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.716605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.716649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.716882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.716935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 
00:25:49.563 [2024-11-15 11:44:29.717133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.717187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.717439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.717503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.717646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.717691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.717904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.717960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.718156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.718210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.718393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.718439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.718686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.718733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.719005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.719058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.719328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.719382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.719548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.719618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 
00:25:49.563 [2024-11-15 11:44:29.719817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.719875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.720139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.720197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.720409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.720463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.720665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.720719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.720977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.721028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.721249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.721317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.721573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.721626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.721847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.721897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.722062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.722115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.722347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.722404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 
00:25:49.563 [2024-11-15 11:44:29.722634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.722685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.722844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.722915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.723135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.723185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.723377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.723453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.723651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.723710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.723979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.724037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.724245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.724319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.724530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.724589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.724780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.724841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.725077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.725134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 
00:25:49.563 [2024-11-15 11:44:29.725364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.725419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.725689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.725747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.563 qpair failed and we were unable to recover it. 00:25:49.563 [2024-11-15 11:44:29.725976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.563 [2024-11-15 11:44:29.726037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.726265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.726337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.726579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.726633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.726865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.726923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.727147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.727204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.727462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.727519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.727717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.727771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.728039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.728097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 
00:25:49.564 [2024-11-15 11:44:29.728333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.728393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.728630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.728687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.728946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.729004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.729186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.729246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.729450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.729508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.729709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.729771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.729986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.730044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.730274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.730346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.730586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.730644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.730872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.730934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 
00:25:49.564 [2024-11-15 11:44:29.731211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.731279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.731574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.731633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.731888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.731963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.732188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.732246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.732497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.732556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.732815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.732873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.733122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.733179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.733402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.733461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.733695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.733752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.733968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.734025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 
00:25:49.564 [2024-11-15 11:44:29.734287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.734359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.734620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.734677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.734909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.734966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.735225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.735283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.735541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.735599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.735848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.735905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.736164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.736221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.736453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.736511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.736732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.736791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.564 [2024-11-15 11:44:29.737052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.737109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 
00:25:49.564 [2024-11-15 11:44:29.737283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.564 [2024-11-15 11:44:29.737355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.564 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.737588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.737649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.737872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.737932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.738165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.738222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.738505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.738565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.738821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.738878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.739111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.739169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.739398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.739458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.739729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.739787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 00:25:49.565 [2024-11-15 11:44:29.740050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.565 [2024-11-15 11:44:29.740108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.565 qpair failed and we were unable to recover it. 
00:25:49.565 [2024-11-15 11:44:29.740371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.565 [2024-11-15 11:44:29.740429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:49.565 qpair failed and we were unable to recover it.
(the connect() failed / sock connection error / qpair failed sequence above repeats continuously from 11:44:29.740 through 11:44:29.807, always against tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 and errno = 111 (ECONNREFUSED); every attempt fails and no qpair is recovered)
00:25:49.570 [2024-11-15 11:44:29.807583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.570 [2024-11-15 11:44:29.807617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.570 qpair failed and we were unable to recover it. 00:25:49.570 [2024-11-15 11:44:29.807750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.570 [2024-11-15 11:44:29.807775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.570 qpair failed and we were unable to recover it. 00:25:49.570 [2024-11-15 11:44:29.807907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.570 [2024-11-15 11:44:29.807933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.570 qpair failed and we were unable to recover it. 00:25:49.570 [2024-11-15 11:44:29.808018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.570 [2024-11-15 11:44:29.808045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.570 qpair failed and we were unable to recover it. 00:25:49.570 [2024-11-15 11:44:29.808176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.570 [2024-11-15 11:44:29.808202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.570 qpair failed and we were unable to recover it. 00:25:49.570 [2024-11-15 11:44:29.808320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.808354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.808443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.808469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.808580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.808606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.808718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.808744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.808885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.808911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 
00:25:49.571 [2024-11-15 11:44:29.809003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.809959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.809984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 
00:25:49.571 [2024-11-15 11:44:29.810316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.810948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.810972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.811086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.811112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.811232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.811256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.811348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.811375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.811492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.811517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 
00:25:49.571 [2024-11-15 11:44:29.811602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.811627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.811786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.811844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.812864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.571 [2024-11-15 11:44:29.812889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.571 qpair failed and we were unable to recover it. 00:25:49.571 [2024-11-15 11:44:29.813101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.813159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 
00:25:49.572 [2024-11-15 11:44:29.813371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.813401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.813487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.813512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.813625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.813650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.813787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.813812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.813999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.814057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.814313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.814370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.814488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.814513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.814627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.814652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.814832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.814923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.815104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.815141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 
00:25:49.572 [2024-11-15 11:44:29.815273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.815298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.815531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.815556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.815670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.815695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.815823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.815859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.816021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.816198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.816399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.816506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.816642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.816772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 
00:25:49.572 [2024-11-15 11:44:29.816910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.816946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.817103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.817139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.817284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.817331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.817500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.817524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.817661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.817696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.817806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.817843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.817988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.818024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.818211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.818247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.818427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.818453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.818562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.818586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 
00:25:49.572 [2024-11-15 11:44:29.818699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.818734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.818887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.818922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.819047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.819071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.819260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.819296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.819462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.819487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.819599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.819624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.819700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.572 [2024-11-15 11:44:29.819725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.572 qpair failed and we were unable to recover it. 00:25:49.572 [2024-11-15 11:44:29.819830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.819855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.819970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.820020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.820171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.820206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 
00:25:49.573 [2024-11-15 11:44:29.820363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.820395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.820561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.820586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.820699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.820724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.820847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.820884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.821033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.821068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.821178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.821215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.821386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.821412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.821522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.821547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.821663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.821698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.821845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.821892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 
00:25:49.573 [2024-11-15 11:44:29.822003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.822027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.822135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.822171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.822325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.822369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.822463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.822487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.822571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.822597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.822780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.822815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.822964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.823124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.823313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.823492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 
00:25:49.573 [2024-11-15 11:44:29.823632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.823768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.823933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.823969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.824147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.824182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.824339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.824386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.824470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.824496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.824604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.824629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.824798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.824834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.824971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.824996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.825117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.825141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 
00:25:49.573 [2024-11-15 11:44:29.825231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.825255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.825373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.825398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.825517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.825553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.825672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.825708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.825841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.825866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.573 qpair failed and we were unable to recover it. 00:25:49.573 [2024-11-15 11:44:29.825974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.573 [2024-11-15 11:44:29.826000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.826112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.826137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.826265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.826311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.826444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.826481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.826629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.826663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 
00:25:49.574 [2024-11-15 11:44:29.826770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.826811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.826961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.826996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.827116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.827159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.827274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.827299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.827429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.827453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.827594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.827629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.827808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.827844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.827973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.828009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.828186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.828222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.828368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.828403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 
00:25:49.574 [2024-11-15 11:44:29.828547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.828583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.828695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.828730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.828916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.828951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.829134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.829169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.829291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.829340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.829529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.829565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.829683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.829718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.829872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.829909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.830025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.830062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.830208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.830243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 
00:25:49.574 [2024-11-15 11:44:29.830381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.830418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.830568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.830604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.830730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.830773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.830883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.830909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.831029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.831053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.831199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.831235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.831445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.831482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.831657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.831699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.831814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.831850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.831997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.832034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 
00:25:49.574 [2024-11-15 11:44:29.832179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.832216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.832365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.832402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.832554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.832589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.574 qpair failed and we were unable to recover it. 00:25:49.574 [2024-11-15 11:44:29.832699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.574 [2024-11-15 11:44:29.832735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.832851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.832889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.833006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.833042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.833219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.833256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.833430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.833467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.833643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.833679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.833793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.833829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 
00:25:49.575 [2024-11-15 11:44:29.833986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.834143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.834331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.834488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.834650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.834793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.834906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.834931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.835088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.835122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.835272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.835316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.835477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.835502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 
00:25:49.575 [2024-11-15 11:44:29.835692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.835727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.835840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.835874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.836018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.836055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.836165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.836200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.836362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.836398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.836544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.836580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.836699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.836736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.836886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.836921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.837030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.837066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.837212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.837247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 
00:25:49.575 [2024-11-15 11:44:29.837392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.837427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.837545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.837582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.837740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.837774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.837906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.837941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.838092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.838128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.838287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.838335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.838465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.838500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.575 qpair failed and we were unable to recover it. 00:25:49.575 [2024-11-15 11:44:29.838649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.575 [2024-11-15 11:44:29.838694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.838838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.838876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.838992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.839028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 
00:25:49.576 [2024-11-15 11:44:29.839149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.839184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.839346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.839384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.839537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.839571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.839721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.839756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.839907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.839943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.840089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.840124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.840249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.840290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.840459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.840496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.840656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.840692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.840810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.840848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 
00:25:49.576 [2024-11-15 11:44:29.841001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.841036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.841190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.841226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.841414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.841451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.841559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.841595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.841706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.841743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.841875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.841911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.842089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.842125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.842271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.842315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.842442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.842480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.842642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.842678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 
00:25:49.576 [2024-11-15 11:44:29.842823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.842859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.842979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.843015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.843173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.843209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.843386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.843422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.843579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.843615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.843762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.843797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.843944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.843980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.844119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.844155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.844338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.844375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.844497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.844533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 
00:25:49.576 [2024-11-15 11:44:29.844681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.844719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.844836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.844872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.844989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.845027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.845175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.845212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.845368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.576 [2024-11-15 11:44:29.845405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.576 qpair failed and we were unable to recover it. 00:25:49.576 [2024-11-15 11:44:29.845547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.845584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.845732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.845767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.845923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.845966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.846075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.846111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.846223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.846258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 
00:25:49.577 [2024-11-15 11:44:29.846447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.846483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.846660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.846695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.846810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.846846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.846998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.847034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.847173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.847208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.847385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.847421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.847537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.847573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.847683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.847721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.847873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.847909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.848089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.848125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 
00:25:49.577 [2024-11-15 11:44:29.848318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.848355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.848485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.848521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.848674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.848709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.848819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.848855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.849033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.849069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.849225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.849261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.849458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.849494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.849646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.849681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.849807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.849843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.849989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.850026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 
00:25:49.577 [2024-11-15 11:44:29.850209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.850245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.850387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.850434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.850614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.850650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.850795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.850831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.850986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.851021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.851210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.851246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.851378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.851415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.851575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.851611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.851747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.851782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.851901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.851937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 
00:25:49.577 [2024-11-15 11:44:29.852113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.852148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.852331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.852376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.852541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.577 [2024-11-15 11:44:29.852576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.577 qpair failed and we were unable to recover it. 00:25:49.577 [2024-11-15 11:44:29.852753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.852789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.852969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.853005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.853158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.853194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.853345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.853398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.853544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.853585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.853731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.853764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.853905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.853940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 
00:25:49.578 [2024-11-15 11:44:29.854093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.854127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.854269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.854310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.854462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.854495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.854635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.854670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.854808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.854841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.854975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.855009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.855177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.855210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.855384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.855419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.855571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.855604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.855752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.855785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 
00:25:49.578 [2024-11-15 11:44:29.855901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.855935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.856100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.856133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.856247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.856280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.856443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.856477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.856647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.856680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.856832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.856865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.857034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.857067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.857237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.857271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.857455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.857488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.857626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.857658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 
00:25:49.578 [2024-11-15 11:44:29.857810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.857841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.857980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.858012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.858142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.858174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.858285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.858324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.858444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.858478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.858644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.858678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.858848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.858880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.859022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.859053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.859160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.859193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.859370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.859403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 
00:25:49.578 [2024-11-15 11:44:29.859513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.578 [2024-11-15 11:44:29.859544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.578 qpair failed and we were unable to recover it. 00:25:49.578 [2024-11-15 11:44:29.859655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.859688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.859822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.859854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.859994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.860026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.860157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.860189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.860334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.860383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.860519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.860550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.860713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.860751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.860893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.860923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.861055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.861085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 
00:25:49.579 [2024-11-15 11:44:29.861237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.861268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.861413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.861445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.861583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.861613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.861744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.861775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.861912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.861942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.862076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.862106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.862269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.862300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.862479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.862509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.862672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.862702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.862827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.862858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 
00:25:49.579 [2024-11-15 11:44:29.862981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.863011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.863152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.863182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.863358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.863389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.863521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.863552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.863683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.863713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.863842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.863871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.864031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.864190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.864332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.864490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 
00:25:49.579 [2024-11-15 11:44:29.864639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.864825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.864963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.864993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.865135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.865165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.865309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.865339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.865501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.865530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.865631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.865660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.865771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.865801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.865898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.865928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.579 qpair failed and we were unable to recover it. 00:25:49.579 [2024-11-15 11:44:29.866067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.579 [2024-11-15 11:44:29.866096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 
00:25:49.580 [2024-11-15 11:44:29.866218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.866246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.866383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.866414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.866512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.866541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.866698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.866728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.866854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.866885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.867018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.867047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.867175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.867205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.867354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.867389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.867556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.867584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.867711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.867739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 
00:25:49.580 [2024-11-15 11:44:29.867894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.867923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.868934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.868963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.869065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.869094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.869246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.869274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 
00:25:49.580 [2024-11-15 11:44:29.869405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.869434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.869571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.869600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.869703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.869732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.869885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.869915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 
00:25:49.580 [2024-11-15 11:44:29.870876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.870905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.870998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.871025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.871120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.580 [2024-11-15 11:44:29.871148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.580 qpair failed and we were unable to recover it. 00:25:49.580 [2024-11-15 11:44:29.871235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.871262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.871425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.871453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.871576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.871603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.871757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.871784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.871931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.871959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.872052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.872082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.872204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.872232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 
00:25:49.581 [2024-11-15 11:44:29.872346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.872375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.872498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.872526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.872668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.872697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.872847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.872875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.872993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.873141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.873255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.873452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.873582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.873730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 
00:25:49.581 [2024-11-15 11:44:29.873853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.873881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.874912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.874939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.875086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.875228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 
00:25:49.581 [2024-11-15 11:44:29.875386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.875539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.875665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.875782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.875929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.875955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.876052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.876078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.876198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.876224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.876369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.876396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.876511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.876537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.876620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.876646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 
00:25:49.581 [2024-11-15 11:44:29.876764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.581 [2024-11-15 11:44:29.876791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.581 qpair failed and we were unable to recover it. 00:25:49.581 [2024-11-15 11:44:29.876883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.876911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.877963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.877990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 
00:25:49.582 [2024-11-15 11:44:29.878111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.878256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.878382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.878522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.878686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.878804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.878969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.878995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.879087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.879234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.879351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 
00:25:49.582 [2024-11-15 11:44:29.879475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.879648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.879760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.879881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.879906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 
00:25:49.582 [2024-11-15 11:44:29.880824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.880972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.880996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.881119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.881143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.881255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.881279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.881410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.881435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.881544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.881568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.881707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.881731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.881854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.881879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.882021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.582 [2024-11-15 11:44:29.882045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.582 qpair failed and we were unable to recover it. 00:25:49.582 [2024-11-15 11:44:29.882141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 
00:25:49.583 [2024-11-15 11:44:29.882249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.882392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.882526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.882660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.882795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.882896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.882920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 
00:25:49.583 [2024-11-15 11:44:29.883612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.883860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.883976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.884083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.884218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.884329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.884466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.884600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 00:25:49.583 [2024-11-15 11:44:29.884734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.583 [2024-11-15 11:44:29.884759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.583 qpair failed and we were unable to recover it. 
00:25:49.583 [2024-11-15 11:44:29.884877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.583 [2024-11-15 11:44:29.884903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:49.583 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 11:44:29.884877 through 11:44:29.912414 ...]
00:25:49.589 [2024-11-15 11:44:29.912390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.589 [2024-11-15 11:44:29.912414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:49.589 qpair failed and we were unable to recover it.
00:25:49.589 [2024-11-15 11:44:29.912497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.912521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.912638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.912664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.912779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.912804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.912921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.912946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 
00:25:49.589 [2024-11-15 11:44:29.913850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.913962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.913988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.914893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.914918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 
00:25:49.589 [2024-11-15 11:44:29.915176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.589 [2024-11-15 11:44:29.915886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.589 [2024-11-15 11:44:29.915911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.589 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.915996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.916144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.916292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 
00:25:49.590 [2024-11-15 11:44:29.916409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.916543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.916682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.916814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.916946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.916972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 
00:25:49.590 [2024-11-15 11:44:29.917759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.917996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.918934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.918958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 
00:25:49.590 [2024-11-15 11:44:29.919081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.919191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.919313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.919452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.919597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.919768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.919875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.919900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 
00:25:49.590 [2024-11-15 11:44:29.920488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.920872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.920990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.590 [2024-11-15 11:44:29.921014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.590 qpair failed and we were unable to recover it. 00:25:49.590 [2024-11-15 11:44:29.921101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.921126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.921246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.921271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.921388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.921413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.921534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.921559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.921693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.921719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 
00:25:49.591 [2024-11-15 11:44:29.921836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.921860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.921996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.922935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.922959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 
00:25:49.591 [2024-11-15 11:44:29.923181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.923937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.923962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 
00:25:49.591 [2024-11-15 11:44:29.924470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.924887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.924995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.925182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.925328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.925464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.925579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.925732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 
00:25:49.591 [2024-11-15 11:44:29.925869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.925894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.926013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.926039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.926127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.591 [2024-11-15 11:44:29.926153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.591 qpair failed and we were unable to recover it. 00:25:49.591 [2024-11-15 11:44:29.926271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.926296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 00:25:49.592 [2024-11-15 11:44:29.926418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.926443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 00:25:49.592 [2024-11-15 11:44:29.926554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.926580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 00:25:49.592 [2024-11-15 11:44:29.926699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.926724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 00:25:49.592 [2024-11-15 11:44:29.926829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.926855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 00:25:49.592 [2024-11-15 11:44:29.926964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.926989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 00:25:49.592 [2024-11-15 11:44:29.927080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.592 [2024-11-15 11:44:29.927105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.592 qpair failed and we were unable to recover it. 
00:25:49.592 [2024-11-15 11:44:29.927475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.592 [2024-11-15 11:44:29.927501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:49.592 qpair failed and we were unable to recover it.
00:25:49.592 [2024-11-15 11:44:29.927612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.592 [2024-11-15 11:44:29.927663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:49.592 qpair failed and we were unable to recover it.
00:25:49.593 [2024-11-15 11:44:29.934726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.593 [2024-11-15 11:44:29.934752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:49.593 qpair failed and we were unable to recover it.
00:25:49.593 [2024-11-15 11:44:29.934834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.934860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.934968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.934994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.935866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.593 [2024-11-15 11:44:29.935893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.593 qpair failed and we were unable to recover it. 00:25:49.593 [2024-11-15 11:44:29.936007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 
00:25:49.594 [2024-11-15 11:44:29.936115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.936966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.936991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.937141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.937267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 
00:25:49.594 [2024-11-15 11:44:29.937396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.937533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.937675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.937797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.937907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.937933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 
00:25:49.594 [2024-11-15 11:44:29.938674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.938903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.938928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.939808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 
00:25:49.594 [2024-11-15 11:44:29.939919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.939944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.940059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.940084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.594 qpair failed and we were unable to recover it. 00:25:49.594 [2024-11-15 11:44:29.940172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.594 [2024-11-15 11:44:29.940198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.940358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.940385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.940477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.940507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.940600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.940625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.940714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.940739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.940820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.940846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.940965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.940992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.941131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 
00:25:49.595 [2024-11-15 11:44:29.941270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.941450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.941565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.941677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.941806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.941914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.941940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 
00:25:49.595 [2024-11-15 11:44:29.942576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.942935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.942962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 
00:25:49.595 [2024-11-15 11:44:29.943862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.943888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.943979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.944868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.944966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.945007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 
00:25:49.595 [2024-11-15 11:44:29.945122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.945148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.945240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.945266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.945398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.595 [2024-11-15 11:44:29.945425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.595 qpair failed and we were unable to recover it. 00:25:49.595 [2024-11-15 11:44:29.945520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.945546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.945644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.945674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.945770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.945807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.945921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.945946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 
00:25:49.596 [2024-11-15 11:44:29.946401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.946916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.946941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 
00:25:49.596 [2024-11-15 11:44:29.947678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.947931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.947956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.948801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 
00:25:49.596 [2024-11-15 11:44:29.948912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.948938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.949025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.596 [2024-11-15 11:44:29.949051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.596 qpair failed and we were unable to recover it. 00:25:49.596 [2024-11-15 11:44:29.949165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-15 11:44:29.949190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-15 11:44:29.949281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-15 11:44:29.949314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.888 [2024-11-15 11:44:29.949418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.888 [2024-11-15 11:44:29.949444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.888 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.949563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.949589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.949681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.949707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.949797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.949824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.949911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.949936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 
00:25:49.889 [2024-11-15 11:44:29.950176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.950924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.950955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.951037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.951062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.951146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.951172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 00:25:49.889 [2024-11-15 11:44:29.951298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-11-15 11:44:29.951341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.889 qpair failed and we were unable to recover it. 
00:25:49.889 [2024-11-15 11:44:29.951426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.889 [2024-11-15 11:44:29.951452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:49.889 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:44:29.951 through 11:44:29.977 ...]
00:25:49.897 [2024-11-15 11:44:29.977678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.897 [2024-11-15 11:44:29.977703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:49.897 qpair failed and we were unable to recover it.
00:25:49.897 [2024-11-15 11:44:29.977818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.977843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.977922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.977948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.978086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.978222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.978361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.978469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.978615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.897 [2024-11-15 11:44:29.978767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.897 qpair failed and we were unable to recover it. 00:25:49.897 [2024-11-15 11:44:29.978889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.978915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 
00:25:49.898 [2024-11-15 11:44:29.979189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.979941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.979966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 
00:25:49.898 [2024-11-15 11:44:29.980480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.980955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.980982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 
00:25:49.898 [2024-11-15 11:44:29.981706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.981875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.981993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.982018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.982119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.982145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.982251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.898 [2024-11-15 11:44:29.982277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.898 qpair failed and we were unable to recover it. 00:25:49.898 [2024-11-15 11:44:29.982360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.982386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.982518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.982543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.982628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.982655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.982780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.982806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.982899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.982931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 
00:25:49.899 [2024-11-15 11:44:29.983020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.983896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.983922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 
00:25:49.899 [2024-11-15 11:44:29.984228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.984971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.984998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.985098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.985123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.985205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.985232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.985354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.985380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 
00:25:49.899 [2024-11-15 11:44:29.985496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.899 [2024-11-15 11:44:29.985521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.899 qpair failed and we were unable to recover it. 00:25:49.899 [2024-11-15 11:44:29.985612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.985637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.985752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.985778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.985861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.985888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 
00:25:49.900 [2024-11-15 11:44:29.986788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.986926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.986951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.987914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.987940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 
00:25:49.900 [2024-11-15 11:44:29.988022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.988156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.988295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.988466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.988603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.988743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.900 [2024-11-15 11:44:29.988884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.900 [2024-11-15 11:44:29.988911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.900 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 
00:25:49.901 [2024-11-15 11:44:29.989439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.989874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.989997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 
00:25:49.901 [2024-11-15 11:44:29.990652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.990903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.990937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.991761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 
00:25:49.901 [2024-11-15 11:44:29.991906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.991932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.992025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.992050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.992143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.992168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.992247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.992273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.992391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.992417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.992543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.901 [2024-11-15 11:44:29.992569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.901 qpair failed and we were unable to recover it. 00:25:49.901 [2024-11-15 11:44:29.992684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.992709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.992831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.992857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.992982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.993131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 
00:25:49.902 [2024-11-15 11:44:29.993271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.993428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.993597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.993738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.993875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.993901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.994020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.994045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.994129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.994155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.994248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.994275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.994432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.994458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 00:25:49.902 [2024-11-15 11:44:29.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.902 [2024-11-15 11:44:29.994574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.902 qpair failed and we were unable to recover it. 
00:25:49.902 [2024-11-15 11:44:29.994665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.902 [2024-11-15 11:44:29.994691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:49.902 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry logged between 11:44:29.994 and 11:44:30.014 ...]
00:25:49.906 [2024-11-15 11:44:30.014624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.906 [2024-11-15 11:44:30.014667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420
00:25:49.906 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 for every retry logged between 11:44:30.014 and 11:44:30.022, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:49.908 [2024-11-15 11:44:30.022248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.022397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.022509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.022612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.022726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.022838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.022947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.022974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 
00:25:49.908 [2024-11-15 11:44:30.023414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.023963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.023987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.024099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.024124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.024211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.024236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.024339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.024364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 00:25:49.908 [2024-11-15 11:44:30.024455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.908 [2024-11-15 11:44:30.024485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.908 qpair failed and we were unable to recover it. 
00:25:49.908 [2024-11-15 11:44:30.024566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.024591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.024703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.024738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.024820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.024845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.024937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.024962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 
00:25:49.909 [2024-11-15 11:44:30.025847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.025960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.025987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.026955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.026981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 
00:25:49.909 [2024-11-15 11:44:30.027069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.027959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.027984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 
00:25:49.909 [2024-11-15 11:44:30.028353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.028967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.028992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.029073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.909 [2024-11-15 11:44:30.029099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.909 qpair failed and we were unable to recover it. 00:25:49.909 [2024-11-15 11:44:30.029176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.029291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.029414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 
00:25:49.910 [2024-11-15 11:44:30.029528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.029671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.029788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.029893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.029918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 
00:25:49.910 [2024-11-15 11:44:30.030743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.030954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.030979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.031793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 
00:25:49.910 [2024-11-15 11:44:30.031903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.031927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.032889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.032915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.033027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.033052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 
00:25:49.910 [2024-11-15 11:44:30.033141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.033167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.033257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.033283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.033397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.033423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.033505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.910 [2024-11-15 11:44:30.033530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.910 qpair failed and we were unable to recover it. 00:25:49.910 [2024-11-15 11:44:30.033607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.033633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.033728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.033766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.033876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.033913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.034016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.034047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.034162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.034209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.034366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.034403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 
00:25:49.911 [2024-11-15 11:44:30.034537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.034584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.034751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.034792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.034907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.034940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.035908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.035933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 
00:25:49.911 [2024-11-15 11:44:30.036020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.036887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.036914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.037011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.037175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 
00:25:49.911 [2024-11-15 11:44:30.037288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.037512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.037644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.037755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.037890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.037916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.038001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.038026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.038120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.038145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.038244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.038283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.038413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.038444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.911 [2024-11-15 11:44:30.038533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.038560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 
00:25:49.911 [2024-11-15 11:44:30.038641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.911 [2024-11-15 11:44:30.038667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.911 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.038753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.038778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.038858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.038885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.038979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 
00:25:49.912 [2024-11-15 11:44:30.039814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.039929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.039955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.040892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.040918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 
00:25:49.912 [2024-11-15 11:44:30.041000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.041143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.041282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.041521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.041648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.041762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.041906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.041933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.042023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.042050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.042136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.042162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.042293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.042340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 
00:25:49.912 [2024-11-15 11:44:30.042436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.042465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.042554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.912 [2024-11-15 11:44:30.042580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.912 qpair failed and we were unable to recover it. 00:25:49.912 [2024-11-15 11:44:30.042694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.042719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.042859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.042884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.042978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.043098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.043208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.043325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.043446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.043563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 
00:25:49.913 [2024-11-15 11:44:30.043779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.043917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.043944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.044929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.044954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 
00:25:49.913 [2024-11-15 11:44:30.045033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.045902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.045926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 
00:25:49.913 [2024-11-15 11:44:30.046292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.046929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.913 [2024-11-15 11:44:30.046956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.913 qpair failed and we were unable to recover it. 00:25:49.913 [2024-11-15 11:44:30.047066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.047179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.047289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.047411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 
00:25:49.914 [2024-11-15 11:44:30.047526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.047653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.047759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.047925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.047951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 
00:25:49.914 [2024-11-15 11:44:30.048768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.048926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.048953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.049869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.049895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 
00:25:49.914 [2024-11-15 11:44:30.050010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.050923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.050947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.051034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.914 [2024-11-15 11:44:30.051060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.914 qpair failed and we were unable to recover it. 00:25:49.914 [2024-11-15 11:44:30.051152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 
00:25:49.915 [2024-11-15 11:44:30.051264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.051416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.051523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.051666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.051779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.051899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.051926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.052025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.052140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.052276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.052415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 
00:25:49.915 [2024-11-15 11:44:30.052631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.052778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.052897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.052923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 
00:25:49.915 [2024-11-15 11:44:30.053819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.053931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.053957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.054929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.054954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 
00:25:49.915 [2024-11-15 11:44:30.055041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.915 [2024-11-15 11:44:30.055789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.915 qpair failed and we were unable to recover it. 00:25:49.915 [2024-11-15 11:44:30.055885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.055914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 
00:25:49.916 [2024-11-15 11:44:30.056338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.056937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.056964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 
00:25:49.916 [2024-11-15 11:44:30.057541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.057899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.057924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 
00:25:49.916 [2024-11-15 11:44:30.058674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.058923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.058954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.059809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 
00:25:49.916 [2024-11-15 11:44:30.059918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.059943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.060036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.060062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.060151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.060181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.060297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.916 [2024-11-15 11:44:30.060330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.916 qpair failed and we were unable to recover it. 00:25:49.916 [2024-11-15 11:44:30.060406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.060431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.060516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.060543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.060635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.060662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.060752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.060778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.060873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.060898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.060993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 
00:25:49.917 [2024-11-15 11:44:30.061115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.061971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.061996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.062088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.062227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 
00:25:49.917 [2024-11-15 11:44:30.062372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.062510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.062660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.062799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.062918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.062943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.063040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.917 [2024-11-15 11:44:30.063065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.917 qpair failed and we were unable to recover it. 00:25:49.917 [2024-11-15 11:44:30.063151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.063262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.063441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.063578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 
00:25:49.918 [2024-11-15 11:44:30.063703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.063816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.063927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.063952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.064874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.064901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 
00:25:49.918 [2024-11-15 11:44:30.064987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bff30 (9): Bad file descriptor 00:25:49.918 [2024-11-15 11:44:30.065299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.065907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.065932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 
00:25:49.918 [2024-11-15 11:44:30.066148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.066883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.066977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.067121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.067267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 
00:25:49.918 [2024-11-15 11:44:30.067392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.067530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.067640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.067782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.918 [2024-11-15 11:44:30.067891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.918 [2024-11-15 11:44:30.067915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.918 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.067997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 
00:25:49.919 [2024-11-15 11:44:30.068636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.068961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.068985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.069112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.069263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.069389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.069499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.069621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.069752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 
00:25:49.919 [2024-11-15 11:44:30.069917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.069945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.070908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.070933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 
00:25:49.919 [2024-11-15 11:44:30.071203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.071968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.071993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.072149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.072189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.072291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.072332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.072425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.072453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 
00:25:49.919 [2024-11-15 11:44:30.072573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.072602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.072689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.072715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.919 qpair failed and we were unable to recover it. 00:25:49.919 [2024-11-15 11:44:30.072833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.919 [2024-11-15 11:44:30.072859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.072974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.073113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.073228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.073344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.073487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.073608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.073718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 
00:25:49.920 [2024-11-15 11:44:30.073875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.073913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.074832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.074857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 
00:25:49.920 [2024-11-15 11:44:30.075172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.075905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.075932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 
00:25:49.920 [2024-11-15 11:44:30.076407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.076872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.076979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.077004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.077098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.077123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.077205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.077236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.077324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.920 [2024-11-15 11:44:30.077351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.920 qpair failed and we were unable to recover it. 00:25:49.920 [2024-11-15 11:44:30.077442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.077468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 
00:25:49.921 [2024-11-15 11:44:30.077581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.077607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.077694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.077720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.077832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.077858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.077974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.077999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.078129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.078266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.078407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.078576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.078715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.078822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 
00:25:49.921 [2024-11-15 11:44:30.078961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.078985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.079895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.079920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 
00:25:49.921 [2024-11-15 11:44:30.080329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.080968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.080993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.081108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.081271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.081423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.081535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 
00:25:49.921 [2024-11-15 11:44:30.081655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.081791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.081907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.081932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.082021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.082046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.082131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.082156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.082237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.082263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.082373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.082412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.921 [2024-11-15 11:44:30.082534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.921 [2024-11-15 11:44:30.082562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.921 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.082695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.082723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.082826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.082853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 
00:25:49.922 [2024-11-15 11:44:30.082945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.082971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.083935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.083963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 
00:25:49.922 [2024-11-15 11:44:30.084154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.084906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.084931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 
00:25:49.922 [2024-11-15 11:44:30.085381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.085904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.085937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.086056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.086084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.086188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.086227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.086328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.086356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.086441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.086466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 00:25:49.922 [2024-11-15 11:44:30.086581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.922 [2024-11-15 11:44:30.086607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.922 qpair failed and we were unable to recover it. 
00:25:49.922 [2024-11-15 11:44:30.086689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.086715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.086804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.086829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.086917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.086945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.087843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 
00:25:49.923 [2024-11-15 11:44:30.087948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.087973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.088904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.088998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 
00:25:49.923 [2024-11-15 11:44:30.089111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.089908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.089998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.090143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.090277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 
00:25:49.923 [2024-11-15 11:44:30.090398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.090534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.090651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.090761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.090881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.090906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.091003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.091031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.091125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.091152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.091260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.923 [2024-11-15 11:44:30.091285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.923 qpair failed and we were unable to recover it. 00:25:49.923 [2024-11-15 11:44:30.091403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.091428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.091511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.091536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 
00:25:49.924 [2024-11-15 11:44:30.091616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.091642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.091761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.091787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.091880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.091908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.092835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 
00:25:49.924 [2024-11-15 11:44:30.092948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.092975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.093917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.093943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 
00:25:49.924 [2024-11-15 11:44:30.094136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.094897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.094986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.095125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.095240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 
00:25:49.924 [2024-11-15 11:44:30.095398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.095511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.095649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.095767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.095929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.095956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.096077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.924 [2024-11-15 11:44:30.096104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.924 qpair failed and we were unable to recover it. 00:25:49.924 [2024-11-15 11:44:30.096191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.096297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.096424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.096563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 
00:25:49.925 [2024-11-15 11:44:30.096673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.096810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.096923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.096948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.097817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 
00:25:49.925 [2024-11-15 11:44:30.097930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.097956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.098901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.098993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 
00:25:49.925 [2024-11-15 11:44:30.099221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.099940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.099965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.100048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.100073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.100160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.100185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.100271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.100296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 
00:25:49.925 [2024-11-15 11:44:30.100437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.100463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.100546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.100571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.100772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.100813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.100983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.101016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.925 qpair failed and we were unable to recover it. 00:25:49.925 [2024-11-15 11:44:30.101149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.925 [2024-11-15 11:44:30.101195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.101391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.101422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.101518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.101543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.101631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.101656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.101739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.101764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.101867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.101892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 
00:25:49.926 [2024-11-15 11:44:30.101989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.102887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.102977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.103203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 
00:25:49.926 [2024-11-15 11:44:30.103312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.103427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.103538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.103652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.103767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.103903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.103928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 
00:25:49.926 [2024-11-15 11:44:30.104567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.104894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.104918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.105024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.105050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.105135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.105160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.105254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.105281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.105374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.926 [2024-11-15 11:44:30.105399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.926 qpair failed and we were unable to recover it. 00:25:49.926 [2024-11-15 11:44:30.105517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.105542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.105635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.105660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 
00:25:49.927 [2024-11-15 11:44:30.105746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.105770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.105886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.105911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.106920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.106946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 
00:25:49.927 [2024-11-15 11:44:30.107033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.107916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.107945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 
00:25:49.927 [2024-11-15 11:44:30.108329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.108925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.108952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.109040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.109066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.109277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.109323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.109447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.109474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.109562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.109590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 
00:25:49.927 [2024-11-15 11:44:30.109788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.109814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.109899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.109924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.110012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.110038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.110135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.110163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.110249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.110276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.110370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.927 [2024-11-15 11:44:30.110397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.927 qpair failed and we were unable to recover it. 00:25:49.927 [2024-11-15 11:44:30.110530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.110555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.110641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.110666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.110758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.110782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.110895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.110920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 
00:25:49.928 [2024-11-15 11:44:30.111006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.111920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.111946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 
00:25:49.928 [2024-11-15 11:44:30.112160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.112914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.112938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 
00:25:49.928 [2024-11-15 11:44:30.113378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.113882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.113992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.114125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.114236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.114367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.114535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 
00:25:49.928 [2024-11-15 11:44:30.114700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.114818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.928 [2024-11-15 11:44:30.114842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.928 qpair failed and we were unable to recover it. 00:25:49.928 [2024-11-15 11:44:30.114931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.114957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 
00:25:49.929 [2024-11-15 11:44:30.115888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.115912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.115999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.116940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.116966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 
00:25:49.929 [2024-11-15 11:44:30.117184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.117891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.117917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 
00:25:49.929 [2024-11-15 11:44:30.118429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.118898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.118923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.119016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.119041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.119150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.119174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.119280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.119316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.119409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.119436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 00:25:49.929 [2024-11-15 11:44:30.119519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.929 [2024-11-15 11:44:30.119544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.929 qpair failed and we were unable to recover it. 
00:25:49.929 [2024-11-15 11:44:30.119630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.119655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.119748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.119773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.119855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.119880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.120799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 
00:25:49.930 [2024-11-15 11:44:30.120910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.120935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.121924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.121951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 
00:25:49.930 [2024-11-15 11:44:30.122147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.122955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.122983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 
00:25:49.930 [2024-11-15 11:44:30.123492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.123962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.123989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.124073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.124098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.124188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.124214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.124292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.124323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.930 [2024-11-15 11:44:30.124402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.930 [2024-11-15 11:44:30.124428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.930 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.124514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.124542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 
00:25:49.931 [2024-11-15 11:44:30.124632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.124657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.124744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.124770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.124881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.124907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.125761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 
00:25:49.931 [2024-11-15 11:44:30.125870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.125897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.126920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.126947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 
00:25:49.931 [2024-11-15 11:44:30.127056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.127871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.127986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.128011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.128097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.128122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.128210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.128237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 
00:25:49.931 [2024-11-15 11:44:30.128327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.128354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.128440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.128466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.931 qpair failed and we were unable to recover it. 00:25:49.931 [2024-11-15 11:44:30.128553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.931 [2024-11-15 11:44:30.128578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.128698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.128723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.128807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.128833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.128943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.128969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.129082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.129256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.129403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.129519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 
00:25:49.932 [2024-11-15 11:44:30.129652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.129762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.129913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.129938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.130801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 
00:25:49.932 [2024-11-15 11:44:30.130942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.130967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.131896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.131923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 
00:25:49.932 [2024-11-15 11:44:30.132139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.132896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.132988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.133014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.133125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.133150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.932 [2024-11-15 11:44:30.133229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.133255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 
00:25:49.932 [2024-11-15 11:44:30.133345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.932 [2024-11-15 11:44:30.133373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.932 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.133458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.133485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.133567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.133592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.133677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.133702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.133785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.133811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.133905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.133929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 
00:25:49.933 [2024-11-15 11:44:30.134540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.134894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.134921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 
00:25:49.933 [2024-11-15 11:44:30.135698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.135876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.135992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.136864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.136890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 
00:25:49.933 [2024-11-15 11:44:30.136982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.933 [2024-11-15 11:44:30.137896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.933 [2024-11-15 11:44:30.137934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.933 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 
00:25:49.934 [2024-11-15 11:44:30.138146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.138882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.138909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 
00:25:49.934 [2024-11-15 11:44:30.139374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.139898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.139983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.140094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.140234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.140362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.140523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 
00:25:49.934 [2024-11-15 11:44:30.140703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.140877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.140914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.141964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.141993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 
00:25:49.934 [2024-11-15 11:44:30.142098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.142229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.142355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.142502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.142617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.142724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.934 [2024-11-15 11:44:30.142855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.934 [2024-11-15 11:44:30.142880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.934 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.142957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.142982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.143098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.143127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.143329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.143357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 
00:25:49.935 [2024-11-15 11:44:30.143451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.143478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.143676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.143702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.143818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.143844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.143956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.143982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 
00:25:49.935 [2024-11-15 11:44:30.144867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.144894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.144981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.145973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.145998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 
00:25:49.935 [2024-11-15 11:44:30.146077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.146932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.146962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.147090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.147129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 00:25:49.935 [2024-11-15 11:44:30.147259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.935 [2024-11-15 11:44:30.147286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.935 qpair failed and we were unable to recover it. 
00:25:49.941 [2024-11-15 11:44:30.173198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.941 [2024-11-15 11:44:30.173237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.941 qpair failed and we were unable to recover it. 00:25:49.941 [2024-11-15 11:44:30.173334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.941 [2024-11-15 11:44:30.173367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.941 qpair failed and we were unable to recover it. 00:25:49.941 [2024-11-15 11:44:30.173454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.941 [2024-11-15 11:44:30.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.173572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.173600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.173714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.173740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.173857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.173883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.173967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.173995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 
00:25:49.942 [2024-11-15 11:44:30.174438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.174905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.174932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 
00:25:49.942 [2024-11-15 11:44:30.175688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.175903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.175929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.176762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 
00:25:49.942 [2024-11-15 11:44:30.176893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.176920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.942 [2024-11-15 11:44:30.177895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.942 qpair failed and we were unable to recover it. 00:25:49.942 [2024-11-15 11:44:30.177997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.178162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 
00:25:49.943 [2024-11-15 11:44:30.178297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.178426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.178536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.178678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.178792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.178908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.178934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 
00:25:49.943 [2024-11-15 11:44:30.179554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.179921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.179947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 
00:25:49.943 [2024-11-15 11:44:30.180811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.180912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.180942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.181887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.181914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 
00:25:49.943 [2024-11-15 11:44:30.182027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.182053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.182141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.182169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.182255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.182281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.182386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.943 [2024-11-15 11:44:30.182414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.943 qpair failed and we were unable to recover it. 00:25:49.943 [2024-11-15 11:44:30.182505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.182531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.182646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.182671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.182749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.182775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.182864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.182889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.182978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 
00:25:49.944 [2024-11-15 11:44:30.183241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.183944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.183971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 
00:25:49.944 [2024-11-15 11:44:30.184418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.184949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.184975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 
00:25:49.944 [2024-11-15 11:44:30.185664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.185912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.185937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.186021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.186046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.186129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.186156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.186269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.186294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.186412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.186443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.944 [2024-11-15 11:44:30.186532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.944 [2024-11-15 11:44:30.186560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.944 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.186700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.186726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.186814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.186840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 
00:25:49.945 [2024-11-15 11:44:30.186927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.186952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.187928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.187953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.188090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 
00:25:49.945 [2024-11-15 11:44:30.188254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.188384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.188506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.188617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.188760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.188898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.188925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 
00:25:49.945 [2024-11-15 11:44:30.189468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.189888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.189979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.190005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.190099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.190124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.190212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.190237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.190323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.190349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.190439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.190465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 00:25:49.945 [2024-11-15 11:44:30.190553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.945 [2024-11-15 11:44:30.190578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.945 qpair failed and we were unable to recover it. 
00:25:49.945 [2024-11-15 11:44:30.190684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.190709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.190795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.190819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.190894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.190919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 
00:25:49.946 [2024-11-15 11:44:30.191882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.191909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.191992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.192912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.192938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 
00:25:49.946 [2024-11-15 11:44:30.193034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.193940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.193966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.194079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.194105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 00:25:49.946 [2024-11-15 11:44:30.194188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.946 [2024-11-15 11:44:30.194216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.946 qpair failed and we were unable to recover it. 
00:25:49.946 [2024-11-15 11:44:30.194309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.194337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.194429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.194456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.194551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.194578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.194666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.194698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.194792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.194830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.194949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.194975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 
00:25:49.947 [2024-11-15 11:44:30.195597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.195939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.195967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.196055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.196166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.196277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.196414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.196536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.196760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 
00:25:49.947 [2024-11-15 11:44:30.196897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.196923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.197816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.197843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.198038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.198064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.198151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.198177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 
00:25:49.947 [2024-11-15 11:44:30.198289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.947 [2024-11-15 11:44:30.198330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.947 qpair failed and we were unable to recover it. 00:25:49.947 [2024-11-15 11:44:30.198448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.198476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.198575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.198614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.198736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.198764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.198881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.198914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 
00:25:49.948 [2024-11-15 11:44:30.199706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.199955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.199980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.200914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.200939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 
00:25:49.948 [2024-11-15 11:44:30.201029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.201879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.201983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.202134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 
00:25:49.948 [2024-11-15 11:44:30.202297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.202425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.202540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.202709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.202882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.948 [2024-11-15 11:44:30.202918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.948 qpair failed and we were unable to recover it. 00:25:49.948 [2024-11-15 11:44:30.203028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.203170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.203282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.203403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.203521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 
00:25:49.949 [2024-11-15 11:44:30.203677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.203791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.203933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.203958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.204878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.204903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 
00:25:49.949 [2024-11-15 11:44:30.204993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.205961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.205987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.206072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 
00:25:49.949 [2024-11-15 11:44:30.206178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.206332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.206486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.206593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.206732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.206868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.206893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.207006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.207031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.207119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.207150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.207239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.207278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.949 qpair failed and we were unable to recover it. 00:25:49.949 [2024-11-15 11:44:30.207385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.949 [2024-11-15 11:44:30.207414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 
00:25:49.950 [2024-11-15 11:44:30.207506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.207533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.207622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.207648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.207733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.207758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.207849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.207875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.207968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.207995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.208088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.208113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.208264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.208311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.208436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.208473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.208595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.208632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.208748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.208777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 
00:25:49.950 [2024-11-15 11:44:30.208893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.208920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.209920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.209999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 
00:25:49.950 [2024-11-15 11:44:30.210108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.210260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.210405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.210513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.210618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.210744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.210885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.210910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.211027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.211053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.211165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.211190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.211286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.211318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 
00:25:49.950 [2024-11-15 11:44:30.211429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.211456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.211547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.950 [2024-11-15 11:44:30.211576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.950 qpair failed and we were unable to recover it. 00:25:49.950 [2024-11-15 11:44:30.211689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.211715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.211797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.211823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.211915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.211942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.212029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.212141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.212254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.212488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.212623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 
00:25:49.951 [2024-11-15 11:44:30.212736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.212871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.212896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.213862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.213887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 
00:25:49.951 [2024-11-15 11:44:30.213975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.214953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.214977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.215066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.215091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 
00:25:49.951 [2024-11-15 11:44:30.215202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.215227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.215336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.215362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.215439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.215464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.951 [2024-11-15 11:44:30.215553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.951 [2024-11-15 11:44:30.215579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.951 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.215670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.215695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.215789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.215814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.215913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.215952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 
00:25:49.952 [2024-11-15 11:44:30.216410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.216887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.216912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 
00:25:49.952 [2024-11-15 11:44:30.217606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.217958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.217983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 
00:25:49.952 [2024-11-15 11:44:30.218756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.218896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.218921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.219022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.219048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.219133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.219162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.219254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.219281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.219378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.219404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.952 qpair failed and we were unable to recover it. 00:25:49.952 [2024-11-15 11:44:30.219488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.952 [2024-11-15 11:44:30.219514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.219639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.219677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.219799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.219827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.219911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.219938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 
00:25:49.953 [2024-11-15 11:44:30.220047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.220885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.220982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 
00:25:49.953 [2024-11-15 11:44:30.221248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.221971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.221996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.222128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.222259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.222415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 
00:25:49.953 [2024-11-15 11:44:30.222562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.222674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.222793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.222902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.222928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.223054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.223093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.223191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.223220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.223318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.223344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.223488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.223514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.223606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.223631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.953 qpair failed and we were unable to recover it. 00:25:49.953 [2024-11-15 11:44:30.223721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.953 [2024-11-15 11:44:30.223746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 
00:25:49.954 [2024-11-15 11:44:30.223830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.223854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.223938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.223963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.224945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.224972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 
00:25:49.954 [2024-11-15 11:44:30.225059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.225949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.225977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 
00:25:49.954 [2024-11-15 11:44:30.226348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.226964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.226990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.227079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.227104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.227218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.954 [2024-11-15 11:44:30.227243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.954 qpair failed and we were unable to recover it. 00:25:49.954 [2024-11-15 11:44:30.227334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.227360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.227459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.227488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 
00:25:49.955 [2024-11-15 11:44:30.227609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.227635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.227715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.227741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.227825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.227851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.227932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.227957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 
00:25:49.955 [2024-11-15 11:44:30.228783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.228922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.228947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.229853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 
00:25:49.955 [2024-11-15 11:44:30.229964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.229989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.230890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.230916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.231027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.231052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 
00:25:49.955 [2024-11-15 11:44:30.231138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.231164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.231254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.231281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.955 [2024-11-15 11:44:30.231396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.955 [2024-11-15 11:44:30.231434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.955 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.231568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.231607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.231726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.231752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.231869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.231895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.231982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 
00:25:49.956 [2024-11-15 11:44:30.232489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.232908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.232991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 
00:25:49.956 [2024-11-15 11:44:30.233710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.233861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.233981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.234852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 
00:25:49.956 [2024-11-15 11:44:30.234963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.956 [2024-11-15 11:44:30.234989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.956 qpair failed and we were unable to recover it. 00:25:49.956 [2024-11-15 11:44:30.235080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.235926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.235952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 
00:25:49.957 [2024-11-15 11:44:30.236145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.236895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.236921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.237033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.237195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.237319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 
00:25:49.957 [2024-11-15 11:44:30.237442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.237582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.237798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.237917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.237944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 
00:25:49.957 [2024-11-15 11:44:30.238802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.238916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.238942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.239046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.957 [2024-11-15 11:44:30.239085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.957 qpair failed and we were unable to recover it. 00:25:49.957 [2024-11-15 11:44:30.239209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.239350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.239462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.239570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.239682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.239824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.239968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.239997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 
00:25:49.958 [2024-11-15 11:44:30.240083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.240224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.240372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.240477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.240651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.240760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.240896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.240927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 
00:25:49.958 [2024-11-15 11:44:30.241446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.241941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.241966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 
00:25:49.958 [2024-11-15 11:44:30.242685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.242942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.242969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.243045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.243070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.243154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.243180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.243261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.243287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.958 [2024-11-15 11:44:30.243395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.958 [2024-11-15 11:44:30.243420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.958 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.243513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.243539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.243620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.243645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.243722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.243747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 
00:25:49.959 [2024-11-15 11:44:30.243830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.243855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.243963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.243988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.244897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.244921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 
00:25:49.959 [2024-11-15 11:44:30.245012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.245835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.245991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 
00:25:49.959 [2024-11-15 11:44:30.246332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.246930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.246956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.247046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.247072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.247149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.247176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.247299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.247343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.247460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.247487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 
00:25:49.959 [2024-11-15 11:44:30.247579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.959 [2024-11-15 11:44:30.247607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.959 qpair failed and we were unable to recover it. 00:25:49.959 [2024-11-15 11:44:30.247724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.247749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.247842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.247867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.247972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.247997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.248084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.248194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.248323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.248469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.248696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.248811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 
00:25:49.960 [2024-11-15 11:44:30.248922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.248947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.249900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.249926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 
00:25:49.960 [2024-11-15 11:44:30.250131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.250926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.250952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 
00:25:49.960 [2024-11-15 11:44:30.251389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.960 [2024-11-15 11:44:30.251909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.960 [2024-11-15 11:44:30.251937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.960 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 
00:25:49.961 [2024-11-15 11:44:30.252622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.252903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.252996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 
00:25:49.961 [2024-11-15 11:44:30.253835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.253953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.253980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.254943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.254969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 
00:25:49.961 [2024-11-15 11:44:30.255061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.255089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.255172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.255198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.255308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.255335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.961 [2024-11-15 11:44:30.255420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.961 [2024-11-15 11:44:30.255446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.961 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.255537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.255565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.255658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.255683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.255794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.255820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.255912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.255939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 
00:25:49.962 [2024-11-15 11:44:30.256315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.256901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.256927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 
00:25:49.962 [2024-11-15 11:44:30.257505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.257869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.257896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 
00:25:49.962 [2024-11-15 11:44:30.258731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.258956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.258983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.259073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.259098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.259191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.259216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.962 qpair failed and we were unable to recover it. 00:25:49.962 [2024-11-15 11:44:30.259300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.962 [2024-11-15 11:44:30.259332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.259463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.259488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.259596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.259622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.259706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.259731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.259813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.259838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 
00:25:49.963 [2024-11-15 11:44:30.259922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.259947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.260925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.260950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 
00:25:49.963 [2024-11-15 11:44:30.261150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.261946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.261971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 
00:25:49.963 [2024-11-15 11:44:30.262322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.262924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.262950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.263042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.263068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.263152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.263179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.963 qpair failed and we were unable to recover it. 00:25:49.963 [2024-11-15 11:44:30.263300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.963 [2024-11-15 11:44:30.263333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.263417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 
00:25:49.964 [2024-11-15 11:44:30.263524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.263549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.263634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.263660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.263750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.263775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.263860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.263889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 
00:25:49.964 [2024-11-15 11:44:30.264785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.264889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.264915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.265889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.265917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 
00:25:49.964 [2024-11-15 11:44:30.265998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.266925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.266952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.267064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.267090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 
00:25:49.964 [2024-11-15 11:44:30.267184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.267211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.267336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.964 [2024-11-15 11:44:30.267363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.964 qpair failed and we were unable to recover it. 00:25:49.964 [2024-11-15 11:44:30.267502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.267527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.267635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.267660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.267742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.267768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.267856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.267882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.267985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 
00:25:49.965 [2024-11-15 11:44:30.268419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.268891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.268921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 
00:25:49.965 [2024-11-15 11:44:30.269657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.269885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.269910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 
00:25:49.965 [2024-11-15 11:44:30.270864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.965 [2024-11-15 11:44:30.270972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.965 [2024-11-15 11:44:30.270998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.965 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.271113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.271140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.271250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.271276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.271480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.271507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.271624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.271650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.271730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.271756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.271852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.271878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.272019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.272165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 
00:25:49.966 [2024-11-15 11:44:30.272292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.272447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.272581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.272697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.272816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.272842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 
00:25:49.966 [2024-11-15 11:44:30.273704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.273937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.273964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.274817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.274844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 
00:25:49.966 [2024-11-15 11:44:30.274980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.966 [2024-11-15 11:44:30.275849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.966 qpair failed and we were unable to recover it. 00:25:49.966 [2024-11-15 11:44:30.275990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.276107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.276238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 
00:25:49.967 [2024-11-15 11:44:30.276392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.276509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.276625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.276734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.276871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.276896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.277014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.277124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.277237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.277369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.277529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 
00:25:49.967 [2024-11-15 11:44:30.277749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.277870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.277899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.278921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.278947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 
00:25:49.967 [2024-11-15 11:44:30.279030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.279056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:49.967 [2024-11-15 11:44:30.279169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.967 [2024-11-15 11:44:30.279194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:49.967 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.279285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.279318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.279433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.279459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.279556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.279598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.279691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.279717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.279827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.279855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.279948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.279976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.280065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.280177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 
00:25:50.251 [2024-11-15 11:44:30.280326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.280451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.280576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.280717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.280846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.280872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.281076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.281112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.281209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.281235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.281341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.281367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.281456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.281492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 00:25:50.251 [2024-11-15 11:44:30.281592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.251 [2024-11-15 11:44:30.281618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.251 qpair failed and we were unable to recover it. 
00:25:50.251 [2024-11-15 11:44:30.281704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.281730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.281924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.281950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.282838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.282864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 
00:25:50.252 [2024-11-15 11:44:30.283148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.283956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.283984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.284092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.284317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.284436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 
00:25:50.252 [2024-11-15 11:44:30.284548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.284691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.284831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.284969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.284996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.285083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.285233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.285389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.285508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.285624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.285734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 
00:25:50.252 [2024-11-15 11:44:30.285899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.285926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.286044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.286153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.286287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.286445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.286563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.252 [2024-11-15 11:44:30.286668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.252 qpair failed and we were unable to recover it. 00:25:50.252 [2024-11-15 11:44:30.286811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.286837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.286927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.286953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 
00:25:50.253 [2024-11-15 11:44:30.287177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.287939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.287965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 
00:25:50.253 [2024-11-15 11:44:30.288450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.288883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.288977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.289146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.289257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.289381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.289514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.289655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 
00:25:50.253 [2024-11-15 11:44:30.289797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.289908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.289933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.290957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.290983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 
00:25:50.253 [2024-11-15 11:44:30.291101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.291126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.291214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.291240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.291326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.291352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.291443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.291469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.291614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.253 [2024-11-15 11:44:30.291640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.253 qpair failed and we were unable to recover it. 00:25:50.253 [2024-11-15 11:44:30.291753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.291779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.291894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.291923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 
00:25:50.254 [2024-11-15 11:44:30.292432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.292924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.292950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 
00:25:50.254 [2024-11-15 11:44:30.293634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.293892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.293919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.294806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 
00:25:50.254 [2024-11-15 11:44:30.294919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.294947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.295898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.295924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.296061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.296087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 
00:25:50.254 [2024-11-15 11:44:30.296198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.296224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.296412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.296438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.254 [2024-11-15 11:44:30.296551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.254 [2024-11-15 11:44:30.296576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.254 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.296713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.296738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.296854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.296881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.296970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.296999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.297107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.297243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.297385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.297526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 
00:25:50.255 [2024-11-15 11:44:30.297637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.297776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.297896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.297922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 
00:25:50.255 [2024-11-15 11:44:30.298840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.298948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.298978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.299934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.299960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.300069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 
00:25:50.255 [2024-11-15 11:44:30.300232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.300389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.300566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.300676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.300785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.300904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.300931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.301015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.301041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.301154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.301180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.301267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.301294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.255 [2024-11-15 11:44:30.301428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.301454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 
00:25:50.255 [2024-11-15 11:44:30.301566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.255 [2024-11-15 11:44:30.301591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.255 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.301713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.301739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.301823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.301850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.301966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.301992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.302088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.302246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.302399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.302518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.302690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.302809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 
00:25:50.256 [2024-11-15 11:44:30.302921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.302946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.303891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.303916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 
00:25:50.256 [2024-11-15 11:44:30.304317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.304885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.304995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.305020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.305144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.305184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.305298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.256 [2024-11-15 11:44:30.305334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.256 qpair failed and we were unable to recover it. 00:25:50.256 [2024-11-15 11:44:30.305477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.305504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.305591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.305617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 
00:25:50.257 [2024-11-15 11:44:30.305737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.305763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.305850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.305876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.305986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.306945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.306970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 
00:25:50.257 [2024-11-15 11:44:30.307090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.307945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.307971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 
00:25:50.257 [2024-11-15 11:44:30.308367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.308961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.308986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.309102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.309129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.309216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.309242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.309361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.309388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.309499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.309525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 
00:25:50.257 [2024-11-15 11:44:30.309613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.309639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.257 qpair failed and we were unable to recover it. 00:25:50.257 [2024-11-15 11:44:30.309764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.257 [2024-11-15 11:44:30.309790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.309884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.309910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.310904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.310931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 
00:25:50.258 [2024-11-15 11:44:30.311040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.311209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.311352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.311464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.311605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.311758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.311920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.311947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 
00:25:50.258 [2024-11-15 11:44:30.312458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.312942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.312968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.313091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.313275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.313406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.313542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.313685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 
00:25:50.258 [2024-11-15 11:44:30.313818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.313961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.313989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.314915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.314943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 00:25:50.258 [2024-11-15 11:44:30.315044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.258 [2024-11-15 11:44:30.315071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.258 qpair failed and we were unable to recover it. 
00:25:50.259 [2024-11-15 11:44:30.315215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.315358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.315482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.315590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.315704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.315811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.315950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.315979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 
00:25:50.259 [2024-11-15 11:44:30.316477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.316918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.316943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 
00:25:50.259 [2024-11-15 11:44:30.317689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.317902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.317929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.318799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 
00:25:50.259 [2024-11-15 11:44:30.318938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.318963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.319107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.319247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.319375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.319500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.319619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.259 [2024-11-15 11:44:30.319765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.259 qpair failed and we were unable to recover it. 00:25:50.259 [2024-11-15 11:44:30.319902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.319928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 
00:25:50.260 [2024-11-15 11:44:30.320241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.320913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.320992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.321132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.321282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.321450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 
00:25:50.260 [2024-11-15 11:44:30.321600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.321747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.321868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.321896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.321985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.322133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.322269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.322415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.322551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.322686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.322793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 
00:25:50.260 [2024-11-15 11:44:30.322929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.322955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.323092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.323206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.323351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.323526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.323679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.260 [2024-11-15 11:44:30.323814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.260 qpair failed and we were unable to recover it. 00:25:50.260 [2024-11-15 11:44:30.323956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.323982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.324098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.324216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 
00:25:50.261 [2024-11-15 11:44:30.324350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.324533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.324658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.324771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.324906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.324931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 
00:25:50.261 [2024-11-15 11:44:30.325662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.325899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.325932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.326788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 
00:25:50.261 [2024-11-15 11:44:30.326891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.326916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.327944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.327973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.328063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.328089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 00:25:50.261 [2024-11-15 11:44:30.328175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.261 [2024-11-15 11:44:30.328201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.261 qpair failed and we were unable to recover it. 
00:25:50.262 [2024-11-15 11:44:30.328341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.328368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.328464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.328491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.328579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.328605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.328689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.328715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.328806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.328834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.328927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.328952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 
00:25:50.262 [2024-11-15 11:44:30.329609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.329898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.329983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.330100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.330220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.330378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.330521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.330627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.330774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 
00:25:50.262 [2024-11-15 11:44:30.330915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.330942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.331920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.331946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 
00:25:50.262 [2024-11-15 11:44:30.332191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.332895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.262 [2024-11-15 11:44:30.332979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.262 [2024-11-15 11:44:30.333006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.262 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.333098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.333231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.333366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 
00:25:50.263 [2024-11-15 11:44:30.333515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.333635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.333780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.333895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.333920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 
00:25:50.263 [2024-11-15 11:44:30.334808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.334945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.334970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.335875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.335901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 
00:25:50.263 [2024-11-15 11:44:30.336148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.263 [2024-11-15 11:44:30.336840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.263 qpair failed and we were unable to recover it. 00:25:50.263 [2024-11-15 11:44:30.336929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.336956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.337070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.337097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.337206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.337231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.337437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.337477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 
00:25:50.264 [2024-11-15 11:44:30.337599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.337627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.337741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.337768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.337862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.337890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.337976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.338107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.338259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.338386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.338511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.338667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.338806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 
00:25:50.264 [2024-11-15 11:44:30.338947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.338975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.339913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.339992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 
00:25:50.264 [2024-11-15 11:44:30.340220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.340883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.340997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.341116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.341239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.341385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 
00:25:50.264 [2024-11-15 11:44:30.341534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.341650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.341790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.341930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.264 [2024-11-15 11:44:30.341960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.264 qpair failed and we were unable to recover it. 00:25:50.264 [2024-11-15 11:44:30.342049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.342155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.342274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.342402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.342536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.342667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 
00:25:50.265 [2024-11-15 11:44:30.342775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.342893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.342918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.343957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.343983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 
00:25:50.265 [2024-11-15 11:44:30.344068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.344894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.344987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.345099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.345235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 
00:25:50.265 [2024-11-15 11:44:30.345355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.345476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.345613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.345723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.345865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.345890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.346007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.346031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.346139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.346164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.346247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.346273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.346382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.346421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 00:25:50.265 [2024-11-15 11:44:30.346547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.265 [2024-11-15 11:44:30.346574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.265 qpair failed and we were unable to recover it. 
00:25:50.265 [2024-11-15 11:44:30.346688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.346713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.346793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.346819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.346898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.346923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 
00:25:50.266 [2024-11-15 11:44:30.347832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.347939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.347964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.348909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.348935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 
00:25:50.266 [2024-11-15 11:44:30.349062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.349291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.349424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.349562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.349693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.349801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.349941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.349967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.350068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.350225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.350344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 
00:25:50.266 [2024-11-15 11:44:30.350485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.350655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.350791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.350904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.266 [2024-11-15 11:44:30.350929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.266 qpair failed and we were unable to recover it. 00:25:50.266 [2024-11-15 11:44:30.351015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.351167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.351309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.351435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.351550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.351667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 
00:25:50.267 [2024-11-15 11:44:30.351778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.351885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.351910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.352905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.352929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 
00:25:50.267 [2024-11-15 11:44:30.353044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.353193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.353367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.353500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.353615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.353733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.353873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.353899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 
00:25:50.267 [2024-11-15 11:44:30.354414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.354895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.354921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 
00:25:50.267 [2024-11-15 11:44:30.355602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.267 [2024-11-15 11:44:30.355749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.267 qpair failed and we were unable to recover it. 00:25:50.267 [2024-11-15 11:44:30.355833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.355860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.355963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.355988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 
00:25:50.268 [2024-11-15 11:44:30.356797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.356910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.356936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.357815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 
00:25:50.268 [2024-11-15 11:44:30.357929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.357954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.358870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.358983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.359009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 00:25:50.268 [2024-11-15 11:44:30.359124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.268 [2024-11-15 11:44:30.359150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.268 qpair failed and we were unable to recover it. 
00:25:50.268 [2024-11-15 11:44:30.359229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.359254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.359335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.359360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.359471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.359496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.359611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.359637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.359775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.359800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.359925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.359950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.360036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.360062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.360165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.360189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.360276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3039928 Killed "${NVMF_APP[@]}" "$@"
00:25:50.268 [2024-11-15 11:44:30.360301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.268 qpair failed and we were unable to recover it.
00:25:50.268 [2024-11-15 11:44:30.360398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.268 [2024-11-15 11:44:30.360425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 [2024-11-15 11:44:30.360538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 [2024-11-15 11:44:30.360568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 [2024-11-15 11:44:30.360658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:50.269 [2024-11-15 11:44:30.360684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 [2024-11-15 11:44:30.360780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 [2024-11-15 11:44:30.360805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:50.269 [2024-11-15 11:44:30.360896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 [2024-11-15 11:44:30.360921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:50.269 [2024-11-15 11:44:30.361010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 [2024-11-15 11:44:30.361035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:50.269 [2024-11-15 11:44:30.361124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 [2024-11-15 11:44:30.361149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
00:25:50.269 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:50.269 [2024-11-15 11:44:30.361259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.269 [2024-11-15 11:44:30.361285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420
00:25:50.269 qpair failed and we were unable to recover it.
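For context: errno = 111 on Linux is ECONNREFUSED, so each posix_sock_create failure above simply means nothing is listening on 10.0.0.2:4420 while the freshly killed nvmf_tgt is down, and the host gives up on each qpair ("qpair failed and we were unable to recover it"). A minimal manual probe of the listener - not part of the SPDK test suite, with the address and port copied from the log above - would be:

  # Poll 10.0.0.2:4420 using bash's /dev/tcp; while the target is down the
  # connect() inside bash fails with ECONNREFUSED (errno 111), as logged above.
  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      echo "10.0.0.2:4420 still refusing connections"
      sleep 1
  done
  echo "10.0.0.2:4420 is accepting connections again"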
00:25:50.269 [2024-11-15 11:44:30.361390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.361436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.361529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.361556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.361667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.361693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.361775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.361802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.361889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.361922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 
00:25:50.269 [2024-11-15 11:44:30.362601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.362929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.362954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 
00:25:50.269 [2024-11-15 11:44:30.363764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.363880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.363909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.269 [2024-11-15 11:44:30.364874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.269 qpair failed and we were unable to recover it. 00:25:50.269 [2024-11-15 11:44:30.364988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 
00:25:50.270 [2024-11-15 11:44:30.365124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.365280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.365438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.365560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.365679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.365802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.365916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.365942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.366032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.366059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 00:25:50.270 [2024-11-15 11:44:30.366168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.270 [2024-11-15 11:44:30.366193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.270 qpair failed and we were unable to recover it. 
00:25:50.270 [... connect() failed, errno = 111 records for tqpair=0x7fdd30000b90 and tqpair=0x12b1fa0 (11:44:30.366 through 11:44:30.368) continue here, interleaved in the console output with the test's xtrace lines below; each record ends with "qpair failed and we were unable to recover it." The xtrace lines are reproduced in order: ...]
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3040484
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3040484
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3040484 ']'
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:50.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:50.270 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
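The xtrace lines above show nvmf_target_disconnect_tc2 launching a second nvmf_tgt instance (pid 3040484) inside the cvl_0_0_ns_spdk namespace and then blocking in waitforlisten until that process is alive and its RPC UNIX domain socket /var/tmp/spdk.sock answers (rpc_addr=/var/tmp/spdk.sock, max_retries=100). A minimal bash sketch of that wait pattern follows; the helper names, the connect() probe and the polling interval are illustrative assumptions, not SPDK's actual waitforlisten implementation.

probe_unix_sock() {
    # Succeeds only if a connect() on the given UNIX-domain socket is accepted.
    python3 -c 'import socket, sys; s = socket.socket(socket.AF_UNIX); s.connect(sys.argv[1])' "$1" 2>/dev/null
}

wait_for_rpc_sock() {
    # Sketch of a waitforlisten-style loop: poll until the target process is
    # alive and its RPC socket answers, or give up after max_retries attempts.
    local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1      # target process died
        [[ -S $rpc_sock ]] && probe_unix_sock "$rpc_sock" && return 0
        sleep 0.1
    done
    return 1
}

# Example, with the pid taken from the trace above:
# wait_for_rpc_sock 3040484 /var/tmp/spdk.sock 100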
00:25:50.270 [2024-11-15 11:44:30.368312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.270 [2024-11-15 11:44:30.368340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:50.270 qpair failed and we were unable to recover it.
00:25:50.275 [... the same three-line record repeats from 11:44:30.368 through 11:44:30.389 for tqpair handles 0x7fdd30000b90, 0x7fdd38000b90, 0x7fdd2c000b90 and 0x12b1fa0, always against addr=10.0.0.2, port=4420 and always ending with "qpair failed and we were unable to recover it." ...]
00:25:50.275 [2024-11-15 11:44:30.389589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.389615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.389701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.389727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.389803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.389829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.389919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.389946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.390041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.390069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.390165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.390191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.390287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.390327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.390418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.390443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.390536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.275 [2024-11-15 11:44:30.390561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.275 qpair failed and we were unable to recover it. 00:25:50.275 [2024-11-15 11:44:30.390670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.390696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 
00:25:50.276 [2024-11-15 11:44:30.390789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.390816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.390898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.390923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.391873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.391900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 
00:25:50.276 [2024-11-15 11:44:30.391986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.392898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.392980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.393090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 
00:25:50.276 [2024-11-15 11:44:30.393204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.393378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.393534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.393645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.393770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.393887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.393920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 
00:25:50.276 [2024-11-15 11:44:30.394527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.394904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.394993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.395023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.395134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.395162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.276 [2024-11-15 11:44:30.395247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.276 [2024-11-15 11:44:30.395273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.276 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.395369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.395395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.395478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.395503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.395586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.395611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 
00:25:50.277 [2024-11-15 11:44:30.395701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.395727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.395837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.395862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.395977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.396813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 
00:25:50.277 [2024-11-15 11:44:30.396921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.396947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.397905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.397931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 
00:25:50.277 [2024-11-15 11:44:30.398199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.398955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.398980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 
00:25:50.277 [2024-11-15 11:44:30.399436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.277 [2024-11-15 11:44:30.399881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.277 qpair failed and we were unable to recover it. 00:25:50.277 [2024-11-15 11:44:30.399965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.399990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 
00:25:50.278 [2024-11-15 11:44:30.400540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.400897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.400922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 
00:25:50.278 [2024-11-15 11:44:30.401763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.401900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.401995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.402799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 
00:25:50.278 [2024-11-15 11:44:30.402913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.402938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.403919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.403946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.404040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.404066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 
00:25:50.278 [2024-11-15 11:44:30.404152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.404178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.404296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.404328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.404417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.404443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.278 qpair failed and we were unable to recover it. 00:25:50.278 [2024-11-15 11:44:30.404532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.278 [2024-11-15 11:44:30.404558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.404676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.404702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.404802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.404828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.404925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.404951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 
00:25:50.279 [2024-11-15 11:44:30.405425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.405912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.405938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 
00:25:50.279 [2024-11-15 11:44:30.406650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.406905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.406996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 
00:25:50.279 [2024-11-15 11:44:30.407841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.407868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.407962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.408863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.408890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 00:25:50.279 [2024-11-15 11:44:30.409028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.409054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.279 qpair failed and we were unable to recover it. 
00:25:50.279 [2024-11-15 11:44:30.409149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.279 [2024-11-15 11:44:30.409174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.409280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.409406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.409527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.409651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.409767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.409875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.409971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.410131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.410270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 
00:25:50.280 [2024-11-15 11:44:30.410398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.410535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.410677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.410810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.410924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.410950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 
00:25:50.280 [2024-11-15 11:44:30.411697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.411949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.411976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.412807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 
00:25:50.280 [2024-11-15 11:44:30.412920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.412947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.413063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.413091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.413203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.413231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.413334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.413361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.280 qpair failed and we were unable to recover it. 00:25:50.280 [2024-11-15 11:44:30.413446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.280 [2024-11-15 11:44:30.413472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.413557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.413582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.413700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.413726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.413816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.413841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.413933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.413958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 
00:25:50.281 [2024-11-15 11:44:30.414185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.414896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.414921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 
00:25:50.281 [2024-11-15 11:44:30.415457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.415962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.415987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.416092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.416250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.416319] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:25:50.281 [2024-11-15 11:44:30.416373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416400] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.281 [2024-11-15 11:44:30.416400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.416504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 
00:25:50.281 [2024-11-15 11:44:30.416651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.416782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.416917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.416941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.417788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 
00:25:50.281 [2024-11-15 11:44:30.417898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.417924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.418009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.418037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.281 qpair failed and we were unable to recover it. 00:25:50.281 [2024-11-15 11:44:30.418120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.281 [2024-11-15 11:44:30.418147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.418278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.418339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.418445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.418475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.418564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.418592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.418687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.418715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.418830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.418857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.418951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.418977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 
00:25:50.282 [2024-11-15 11:44:30.419179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.419947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.419976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 
00:25:50.282 [2024-11-15 11:44:30.420483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.420951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.420979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 
00:25:50.282 [2024-11-15 11:44:30.421684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.421909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.421935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.282 [2024-11-15 11:44:30.422774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 
00:25:50.282 [2024-11-15 11:44:30.422892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.282 [2024-11-15 11:44:30.422918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.282 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.423879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.423905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 
00:25:50.283 [2024-11-15 11:44:30.424167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.424916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.424942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 
00:25:50.283 [2024-11-15 11:44:30.425409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.425938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.425963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.426062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.426088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.426175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.426201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.426316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.426342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.426427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.426453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 00:25:50.283 [2024-11-15 11:44:30.426549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.283 [2024-11-15 11:44:30.426574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.283 qpair failed and we were unable to recover it. 
00:25:50.283 [2024-11-15 11:44:30.426683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.283 [2024-11-15 11:44:30.426710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420
00:25:50.283 qpair failed and we were unable to recover it.
(The same pair of errors, connect() failed with errno = 111 from posix_sock_create() followed by the sock connection error from nvme_tcp_qpair_connect_sock(), and the "qpair failed and we were unable to recover it." message repeat for every reconnect attempt logged from [2024-11-15 11:44:30.426683] through [2024-11-15 11:44:30.452425], under console timestamps 00:25:50.283 to 00:25:50.289, cycling through tqpair=0x7fdd30000b90, 0x7fdd38000b90 and 0x12b1fa0, all against addr=10.0.0.2, port=4420.)
00:25:50.289 [2024-11-15 11:44:30.452517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.452545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.452659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.452686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.452770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.452795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.452891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.452917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 
00:25:50.289 [2024-11-15 11:44:30.453740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.453885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.453992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.454106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.454249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.454381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.454494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.454623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.289 [2024-11-15 11:44:30.454744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.289 [2024-11-15 11:44:30.454769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.289 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.454884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.454911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 
00:25:50.290 [2024-11-15 11:44:30.455005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.455869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.455896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 
00:25:50.290 [2024-11-15 11:44:30.456241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.456888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.456914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 
00:25:50.290 [2024-11-15 11:44:30.457534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.457913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.457945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 
00:25:50.290 [2024-11-15 11:44:30.458764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.458904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.458991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.459018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.459134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.459160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.459247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.459273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.459368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.459396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.459508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.290 [2024-11-15 11:44:30.459535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.290 qpair failed and we were unable to recover it. 00:25:50.290 [2024-11-15 11:44:30.459660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.459687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.459773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.459803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.459887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.459912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 
00:25:50.291 [2024-11-15 11:44:30.459997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.460896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.460921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 
00:25:50.291 [2024-11-15 11:44:30.461255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.461901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.461926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 
00:25:50.291 [2024-11-15 11:44:30.462510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.462913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.462946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 
00:25:50.291 [2024-11-15 11:44:30.463789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.463906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.463933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.464043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.464082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.464175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.464201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.464321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.291 [2024-11-15 11:44:30.464347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.291 qpair failed and we were unable to recover it. 00:25:50.291 [2024-11-15 11:44:30.464427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.464452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.464544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.464569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.464670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.464695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.464784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.464809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.464890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.464915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 
00:25:50.292 [2024-11-15 11:44:30.465042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.465873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.465899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 
00:25:50.292 [2024-11-15 11:44:30.466234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.466949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.466976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 
00:25:50.292 [2024-11-15 11:44:30.467383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.467911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.467936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 
00:25:50.292 [2024-11-15 11:44:30.468708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.468967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.468993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.469109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.469134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.469247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.469272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.292 qpair failed and we were unable to recover it. 00:25:50.292 [2024-11-15 11:44:30.469358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.292 [2024-11-15 11:44:30.469385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.469475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.469500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.469623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.469649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.469734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.469759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.469847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.469875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 
00:25:50.293 [2024-11-15 11:44:30.469959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.469985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.470951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.470978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.471065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 
00:25:50.293 [2024-11-15 11:44:30.471192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.471332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.471469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.471620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.471779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.471895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.471923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.472016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.472160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.472321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.472462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 
00:25:50.293 [2024-11-15 11:44:30.472596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.472733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.472872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.472897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 
00:25:50.293 [2024-11-15 11:44:30.473839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.293 [2024-11-15 11:44:30.473865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.293 qpair failed and we were unable to recover it. 00:25:50.293 [2024-11-15 11:44:30.473952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.473978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.474928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.474954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 
00:25:50.294 [2024-11-15 11:44:30.475045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.475894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.475923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 
00:25:50.294 [2024-11-15 11:44:30.476226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.476896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.476987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.477102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.477219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.477346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 
00:25:50.294 [2024-11-15 11:44:30.477487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.477632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.477771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.477886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.477912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 
00:25:50.294 [2024-11-15 11:44:30.478804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.478911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.294 [2024-11-15 11:44:30.478936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.294 qpair failed and we were unable to recover it. 00:25:50.294 [2024-11-15 11:44:30.479019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.479839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.479865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 
00:25:50.295 [2024-11-15 11:44:30.479974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.480911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.480938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 
00:25:50.295 [2024-11-15 11:44:30.481310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.481922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.481948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 
00:25:50.295 [2024-11-15 11:44:30.482503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.482962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.482988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 
00:25:50.295 [2024-11-15 11:44:30.483697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.483937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.483963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.295 [2024-11-15 11:44:30.484052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.295 [2024-11-15 11:44:30.484080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.295 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.484199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.484333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.484449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.484560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.484681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.484819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 
00:25:50.296 [2024-11-15 11:44:30.484956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.484981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.485901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.485926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 
00:25:50.296 [2024-11-15 11:44:30.486177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.486903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.486930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 
00:25:50.296 [2024-11-15 11:44:30.487433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.487904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.487992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 
00:25:50.296 [2024-11-15 11:44:30.488530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.296 [2024-11-15 11:44:30.488971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.296 [2024-11-15 11:44:30.488995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.296 qpair failed and we were unable to recover it. 00:25:50.297 [2024-11-15 11:44:30.489074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.297 [2024-11-15 11:44:30.489099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.297 qpair failed and we were unable to recover it. 00:25:50.297 [2024-11-15 11:44:30.489185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.297 [2024-11-15 11:44:30.489211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.297 qpair failed and we were unable to recover it. 00:25:50.297 [2024-11-15 11:44:30.489327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.297 [2024-11-15 11:44:30.489353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.297 qpair failed and we were unable to recover it. 00:25:50.297 [2024-11-15 11:44:30.489454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.297 [2024-11-15 11:44:30.489499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.297 qpair failed and we were unable to recover it. 00:25:50.297 [2024-11-15 11:44:30.489598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.297 [2024-11-15 11:44:30.489625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.297 qpair failed and we were unable to recover it. 
00:25:50.297 [2024-11-15 11:44:30.489712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.297 [2024-11-15 11:44:30.489738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.297 qpair failed and we were unable to recover it.
00:25:50.297 [... the same three-record sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from [2024-11-15 11:44:30.489821] through [2024-11-15 11:44:30.515577], cycling over tqpairs 0x7fdd38000b90, 0x7fdd30000b90 and 0x12b1fa0, all with addr=10.0.0.2, port=4420 ...]
00:25:50.300 [2024-11-15 11:44:30.503994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:50.303 [2024-11-15 11:44:30.515720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.303 [2024-11-15 11:44:30.515746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.303 qpair failed and we were unable to recover it.
00:25:50.303 [2024-11-15 11:44:30.515826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.515851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.515982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.516894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.516984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 
00:25:50.304 [2024-11-15 11:44:30.517097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.517862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.517978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.518098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.518238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 
00:25:50.304 [2024-11-15 11:44:30.518361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.518482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.518658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.518799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.518944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.518970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 
00:25:50.304 [2024-11-15 11:44:30.519675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.519957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.519986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.304 [2024-11-15 11:44:30.520103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.304 [2024-11-15 11:44:30.520129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.304 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.520209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.520323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.520438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.520553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.520664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.520771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 
00:25:50.305 [2024-11-15 11:44:30.520915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.520941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.521961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 
00:25:50.305 [2024-11-15 11:44:30.522066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.522910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.522935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 
00:25:50.305 [2024-11-15 11:44:30.523299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.523876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.523902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.524013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.524038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.524121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.305 [2024-11-15 11:44:30.524146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.305 qpair failed and we were unable to recover it. 00:25:50.305 [2024-11-15 11:44:30.524283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.524314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.524426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.524452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 
00:25:50.306 [2024-11-15 11:44:30.524569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.524594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.524714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.524742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.524855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.524881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.524968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.524994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 
00:25:50.306 [2024-11-15 11:44:30.525837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.525959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.525984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.526873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.526900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 
00:25:50.306 [2024-11-15 11:44:30.527121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.527924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.527951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.528062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.528090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.306 [2024-11-15 11:44:30.528205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.528230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 
00:25:50.306 [2024-11-15 11:44:30.528320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.306 [2024-11-15 11:44:30.528346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.306 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.528438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.528465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.528543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.528568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.528653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.528679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.528765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.528791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.528879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.528905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 
00:25:50.307 [2024-11-15 11:44:30.529523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.529906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.529998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 
00:25:50.307 [2024-11-15 11:44:30.530736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.530876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.530994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.531877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.531902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 
00:25:50.307 [2024-11-15 11:44:30.531994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.532019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.532112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.532138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.532229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.532256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.532345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.532371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.532458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.532484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.532572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.307 [2024-11-15 11:44:30.532598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.307 qpair failed and we were unable to recover it. 00:25:50.307 [2024-11-15 11:44:30.532683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.532709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.532804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.532831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.532920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.532945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 
00:25:50.308 [2024-11-15 11:44:30.533169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.533896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.533982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.534089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.534226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 
00:25:50.308 [2024-11-15 11:44:30.534396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.534564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.534730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.534866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.534892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.534986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 
00:25:50.308 [2024-11-15 11:44:30.535767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.535908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.535997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.536023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.536138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.536170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.536263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.536288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.308 [2024-11-15 11:44:30.536382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.308 [2024-11-15 11:44:30.536409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.308 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.536523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.536549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.536663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.536688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.536796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.536822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.536912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.536937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 
00:25:50.309 [2024-11-15 11:44:30.537022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.537889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.537915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 
00:25:50.309 [2024-11-15 11:44:30.538327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.538944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.538971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 
00:25:50.309 [2024-11-15 11:44:30.539599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.539944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.539970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.540061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.540087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.540173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.540199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.540278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.540310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.540422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.540447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.309 [2024-11-15 11:44:30.540554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.309 [2024-11-15 11:44:30.540579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.309 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.540659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.540684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 
00:25:50.310 [2024-11-15 11:44:30.540800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.540825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.540917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.540948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.541914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.541939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 
00:25:50.310 [2024-11-15 11:44:30.542022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.542841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.542868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.543454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.543486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.543575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.543602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 
00:25:50.310 [2024-11-15 11:44:30.543692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.543718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.543806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.543832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.543917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.543943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 00:25:50.310 [2024-11-15 11:44:30.544817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.310 [2024-11-15 11:44:30.544842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.310 qpair failed and we were unable to recover it. 
00:25:50.311 [2024-11-15 11:44:30.544954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.544980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.545914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.545940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 
00:25:50.311 [2024-11-15 11:44:30.546169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.546912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.546937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 
00:25:50.311 [2024-11-15 11:44:30.547364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.547973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.547999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.548083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.548111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.548229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.548254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.548366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.548393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.548504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.548530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 
00:25:50.311 [2024-11-15 11:44:30.548640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.548665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.311 [2024-11-15 11:44:30.548772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.311 [2024-11-15 11:44:30.548798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.311 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.548886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.548913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.549761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 
00:25:50.312 [2024-11-15 11:44:30.549881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.549919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.550920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.550946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 
00:25:50.312 [2024-11-15 11:44:30.551149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.551942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.551968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 
00:25:50.312 [2024-11-15 11:44:30.552476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.552967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.552994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.312 [2024-11-15 11:44:30.553082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.312 [2024-11-15 11:44:30.553108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.312 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.553197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.553344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.553453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.553565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 
00:25:50.313 [2024-11-15 11:44:30.553679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.553822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.553957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.553983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.554845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.554871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 
00:25:50.313 [2024-11-15 11:44:30.555023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.555939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.555964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.556073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.556099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.556184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.556211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 
00:25:50.313 [2024-11-15 11:44:30.556299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.556335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.556423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.556449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.556537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.556563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.556673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.313 [2024-11-15 11:44:30.556704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.313 qpair failed and we were unable to recover it. 00:25:50.313 [2024-11-15 11:44:30.556796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.556822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.556926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.556952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 
00:25:50.314 [2024-11-15 11:44:30.557544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.557877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.557902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 
00:25:50.314 [2024-11-15 11:44:30.558827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.558971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.558996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.559903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.559930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.560044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.560070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 
00:25:50.314 [2024-11-15 11:44:30.560161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.560187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.560312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.560338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.314 qpair failed and we were unable to recover it. 00:25:50.314 [2024-11-15 11:44:30.560416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.314 [2024-11-15 11:44:30.560442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.560553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.560578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.560667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.560695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.560779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.560805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.560899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.560926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 
00:25:50.315 [2024-11-15 11:44:30.561367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.561887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.561983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 
00:25:50.315 [2024-11-15 11:44:30.562632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.562963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.562989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 
00:25:50.315 [2024-11-15 11:44:30.563841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.563957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.563983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.564086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.564126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.315 [2024-11-15 11:44:30.564224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.315 [2024-11-15 11:44:30.564252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.315 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.564354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.564386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.564501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.564537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.564679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.564714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.564828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.564864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.564958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.564984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 
00:25:50.316 [2024-11-15 11:44:30.565188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.565892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.565974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 
00:25:50.316 [2024-11-15 11:44:30.566467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.566964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.566999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 
00:25:50.316 [2024-11-15 11:44:30.567741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.567970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.567996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.568077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.568103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.568193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.568218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.316 qpair failed and we were unable to recover it. 00:25:50.316 [2024-11-15 11:44:30.568312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.316 [2024-11-15 11:44:30.568340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.568430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.568456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.568576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.568603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.568685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.568710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.568823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.568848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 
00:25:50.317 [2024-11-15 11:44:30.568947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.568973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.569895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.569921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 
00:25:50.317 [2024-11-15 11:44:30.570136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.570877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.570904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 
00:25:50.317 [2024-11-15 11:44:30.571348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.571954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.571981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.572073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.572099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.572201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.317 [2024-11-15 11:44:30.572241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.317 qpair failed and we were unable to recover it. 00:25:50.317 [2024-11-15 11:44:30.572350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.572378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.572489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.572514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 
00:25:50.318 [2024-11-15 11:44:30.572600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.572625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.572741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.572767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.572867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.572892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 
00:25:50.318 [2024-11-15 11:44:30.573876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.573901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.573999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.574902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.574978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 
00:25:50.318 [2024-11-15 11:44:30.575119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.575908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.575948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.576043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.576071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 00:25:50.318 [2024-11-15 11:44:30.576155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.318 [2024-11-15 11:44:30.576181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.318 qpair failed and we were unable to recover it. 
00:25:50.319 [2024-11-15 11:44:30.576287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.576419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.576522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.576628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.576742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.576850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.576968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.576995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 
00:25:50.319 [2024-11-15 11:44:30.577497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.577882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.319 [2024-11-15 11:44:30.577918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.319 [2024-11-15 11:44:30.577933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.319 [2024-11-15 11:44:30.577945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.319 [2024-11-15 11:44:30.577956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.319 [2024-11-15 11:44:30.577934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.577959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 
00:25:50.319 [2024-11-15 11:44:30.578403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.578895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.319 [2024-11-15 11:44:30.578923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.319 qpair failed and we were unable to recover it. 00:25:50.319 [2024-11-15 11:44:30.579014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 
00:25:50.320 [2024-11-15 11:44:30.579693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.579951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.579916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:50.320 [2024-11-15 11:44:30.579973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:50.320 [2024-11-15 11:44:30.580030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:50.320 [2024-11-15 11:44:30.580039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:50.320 [2024-11-15 11:44:30.580069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.580198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.580315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.580432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.580541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.580654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 
00:25:50.320 [2024-11-15 11:44:30.580773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.580893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.580919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.581889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.581916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 
00:25:50.320 [2024-11-15 11:44:30.581993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.320 [2024-11-15 11:44:30.582940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.320 [2024-11-15 11:44:30.582967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.320 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 
00:25:50.321 [2024-11-15 11:44:30.583289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.583970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.583994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 
00:25:50.321 [2024-11-15 11:44:30.584432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.584875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.584975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 
00:25:50.321 [2024-11-15 11:44:30.585733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.585951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.585976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.586796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 
00:25:50.321 [2024-11-15 11:44:30.586906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.586931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.587014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.587039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.587119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.587144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.321 [2024-11-15 11:44:30.587231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.321 [2024-11-15 11:44:30.587256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.321 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.587363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.587403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.587507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.587546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.587664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.587691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.587784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.587816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.587896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.587921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 
00:25:50.322 [2024-11-15 11:44:30.588126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.588969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.588995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 
00:25:50.322 [2024-11-15 11:44:30.589343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.589914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.589938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 
00:25:50.322 [2024-11-15 11:44:30.590476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.590877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.322 qpair failed and we were unable to recover it. 00:25:50.322 [2024-11-15 11:44:30.590978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.322 [2024-11-15 11:44:30.591023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 
00:25:50.323 [2024-11-15 11:44:30.591699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.591943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.591969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 
00:25:50.323 [2024-11-15 11:44:30.592851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.592962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.592990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.593883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.593997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 
00:25:50.323 [2024-11-15 11:44:30.594116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.594247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.594396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.594511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.594622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.594734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.323 [2024-11-15 11:44:30.594854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.323 [2024-11-15 11:44:30.594882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.323 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.594974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 
00:25:50.324 [2024-11-15 11:44:30.595336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.595913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.595939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.596020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.596048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.596148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.596189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.596293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.596329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 00:25:50.324 [2024-11-15 11:44:30.596418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.324 [2024-11-15 11:44:30.596445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.324 qpair failed and we were unable to recover it. 
[00:25:50.324 - 00:25:50.330: the same three-record sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-15 11:44:30.596554 through 11:44:30.620582, cycling over tqpair=0x12b1fa0, 0x7fdd38000b90, 0x7fdd30000b90 and 0x7fdd2c000b90, all against addr=10.0.0.2, port=4420.]
00:25:50.330 [2024-11-15 11:44:30.620669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.620695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.620787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.620813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.620907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.620933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 
00:25:50.330 [2024-11-15 11:44:30.621847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.621964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.621990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.622074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.622099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.330 [2024-11-15 11:44:30.622182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.330 [2024-11-15 11:44:30.622207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.330 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.622285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.622317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.622417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.622456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.622555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.622583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.622665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.622691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.622779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.622804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.622886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.622912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 
00:25:50.331 [2024-11-15 11:44:30.622997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.623968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.623993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 
00:25:50.331 [2024-11-15 11:44:30.624224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.624935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.624963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 
00:25:50.331 [2024-11-15 11:44:30.625388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.625892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.625980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.331 [2024-11-15 11:44:30.626007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.331 qpair failed and we were unable to recover it. 00:25:50.331 [2024-11-15 11:44:30.626104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.626235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.626366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.626473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 
00:25:50.332 [2024-11-15 11:44:30.626590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.626690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.626826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.626935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.626964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 
00:25:50.332 [2024-11-15 11:44:30.627760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.627899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.627927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.628922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.628947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 
00:25:50.332 [2024-11-15 11:44:30.629055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.629884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.629992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.332 [2024-11-15 11:44:30.630017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.332 qpair failed and we were unable to recover it. 00:25:50.332 [2024-11-15 11:44:30.630094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 
00:25:50.333 [2024-11-15 11:44:30.630199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.630313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.630447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.630554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.630669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.630784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.630884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.630908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 
00:25:50.333 [2024-11-15 11:44:30.631407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.631888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.631927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 
00:25:50.333 [2024-11-15 11:44:30.632626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.632883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.632975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.633002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.633082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.633108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.633189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.633214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.633294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.633332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.333 [2024-11-15 11:44:30.633423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.333 [2024-11-15 11:44:30.633450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.333 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.633539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.633564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.633649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.633673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 
00:25:50.334 [2024-11-15 11:44:30.633783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.633809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.633893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.633918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.634808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 
00:25:50.334 [2024-11-15 11:44:30.634919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.634944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.635905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.635989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 
00:25:50.334 [2024-11-15 11:44:30.636120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.636919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.636945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.637053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.637079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 00:25:50.334 [2024-11-15 11:44:30.637188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.334 [2024-11-15 11:44:30.637214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.334 qpair failed and we were unable to recover it. 
00:25:50.626 [2024-11-15 11:44:30.660129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.626 [2024-11-15 11:44:30.660156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.626 qpair failed and we were unable to recover it. 00:25:50.626 [2024-11-15 11:44:30.660271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.626 [2024-11-15 11:44:30.660297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.626 qpair failed and we were unable to recover it. 00:25:50.626 [2024-11-15 11:44:30.660399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.626 [2024-11-15 11:44:30.660425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.626 qpair failed and we were unable to recover it. 00:25:50.626 [2024-11-15 11:44:30.660507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.626 [2024-11-15 11:44:30.660532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.626 qpair failed and we were unable to recover it. 00:25:50.626 [2024-11-15 11:44:30.660613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.626 [2024-11-15 11:44:30.660638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.626 qpair failed and we were unable to recover it. 00:25:50.626 [2024-11-15 11:44:30.660718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.626 [2024-11-15 11:44:30.660743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.626 qpair failed and we were unable to recover it. 00:25:50.626 [2024-11-15 11:44:30.660844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.660884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.660977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 
00:25:50.627 [2024-11-15 11:44:30.661340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.661963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.661990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 
00:25:50.627 [2024-11-15 11:44:30.662588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.662888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.662978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 
00:25:50.627 [2024-11-15 11:44:30.663840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.663954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.663979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.627 [2024-11-15 11:44:30.664830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.627 qpair failed and we were unable to recover it. 00:25:50.627 [2024-11-15 11:44:30.664916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.664943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 
00:25:50.628 [2024-11-15 11:44:30.665037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.665851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.665973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 
00:25:50.628 [2024-11-15 11:44:30.666215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.666898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.666925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 
00:25:50.628 [2024-11-15 11:44:30.667437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.667938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.667969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 
00:25:50.628 [2024-11-15 11:44:30.668655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.668907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.668932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.669018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.628 [2024-11-15 11:44:30.669044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.628 qpair failed and we were unable to recover it. 00:25:50.628 [2024-11-15 11:44:30.669124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.669231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.669353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.669478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.669591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.669727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 
00:25:50.629 [2024-11-15 11:44:30.669835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.669949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.669981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.670900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.670926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 
00:25:50.629 [2024-11-15 11:44:30.671009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.671936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.671962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.672049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.672074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 
00:25:50.629 [2024-11-15 11:44:30.672158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.672185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.629 [2024-11-15 11:44:30.672298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.629 [2024-11-15 11:44:30.672330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.629 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.672416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.672442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.672524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.672549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.672629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.672656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.672745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.672771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.672880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.672906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.672990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 
00:25:50.630 [2024-11-15 11:44:30.673389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.673900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.673994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 
00:25:50.630 [2024-11-15 11:44:30.674543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.674907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.674933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 
00:25:50.630 [2024-11-15 11:44:30.675679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.675904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.675931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.676042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.630 [2024-11-15 11:44:30.676068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.630 qpair failed and we were unable to recover it. 00:25:50.630 [2024-11-15 11:44:30.676152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.676270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.676403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.676543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.676646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.676760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 
00:25:50.631 [2024-11-15 11:44:30.676877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.676903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.676987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.677961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.677986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 
00:25:50.631 [2024-11-15 11:44:30.678080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.678886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.678913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 
00:25:50.631 [2024-11-15 11:44:30.679287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.679879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.631 qpair failed and we were unable to recover it. 00:25:50.631 [2024-11-15 11:44:30.679989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.631 [2024-11-15 11:44:30.680028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 
00:25:50.632 [2024-11-15 11:44:30.680528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.680899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.680985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 
00:25:50.632 [2024-11-15 11:44:30.681672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.681900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.681926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 
00:25:50.632 [2024-11-15 11:44:30.682830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.682955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.682983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.683919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.683946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 
00:25:50.632 [2024-11-15 11:44:30.684057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.684084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.632 [2024-11-15 11:44:30.684211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.632 [2024-11-15 11:44:30.684251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.632 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.684359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.684386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.684469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.684495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.684572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.684597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.684670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.684695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.684783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.684808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.684893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.684917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 
00:25:50.633 [2024-11-15 11:44:30.685263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.685853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.685884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 
00:25:50.633 [2024-11-15 11:44:30.686525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.686904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.686931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 
00:25:50.633 [2024-11-15 11:44:30.687719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.687971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.687998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.688093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.688121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.688201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.688228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.688319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.688347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.633 [2024-11-15 11:44:30.688455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.633 [2024-11-15 11:44:30.688493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.633 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.688603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.688642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.688740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.688768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.688855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.688881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 
00:25:50.634 [2024-11-15 11:44:30.688965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.688992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.689938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.689963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 
00:25:50.634 [2024-11-15 11:44:30.690158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.690877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.690906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.691000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.691027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.691116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.691141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.691222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.691247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 
00:25:50.634 [2024-11-15 11:44:30.691339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.691366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.691447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.634 [2024-11-15 11:44:30.691472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.634 qpair failed and we were unable to recover it. 00:25:50.634 [2024-11-15 11:44:30.691559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.691584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.691664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.691690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.691769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.691795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.691876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.691904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 
00:25:50.635 [2024-11-15 11:44:30.692492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.692941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.692969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 
00:25:50.635 [2024-11-15 11:44:30.693621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.693956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.693984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 
00:25:50.635 [2024-11-15 11:44:30.694757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.694901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.694927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.635 [2024-11-15 11:44:30.695016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.635 [2024-11-15 11:44:30.695044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.635 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.695790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 
00:25:50.636 [2024-11-15 11:44:30.695908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.695936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.696928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.696962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 
00:25:50.636 [2024-11-15 11:44:30.697055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.697935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.697963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.698050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.698077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 
00:25:50.636 [2024-11-15 11:44:30.698171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.698209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.698297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.698330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.698420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.636 [2024-11-15 11:44:30.698445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.636 qpair failed and we were unable to recover it. 00:25:50.636 [2024-11-15 11:44:30.698533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.698559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.698634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.698660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.698744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.698771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.698853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.698880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 
00:25:50.637 [2024-11-15 11:44:30.699430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.699910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.699998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 
00:25:50.637 [2024-11-15 11:44:30.700550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.700912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.700993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 
00:25:50.637 [2024-11-15 11:44:30.701658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.701881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.701906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.702016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.702043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.702127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.702153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.702235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.702262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.637 [2024-11-15 11:44:30.702351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.637 [2024-11-15 11:44:30.702378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.637 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.702463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.702490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.702574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.702600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.702683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.702709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 
00:25:50.638 [2024-11-15 11:44:30.702790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.702816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.702896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.702922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.703806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 
00:25:50.638 [2024-11-15 11:44:30.703918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.703946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.704941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.704968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 
00:25:50.638 [2024-11-15 11:44:30.705055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.705915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.705997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.638 [2024-11-15 11:44:30.706022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.638 qpair failed and we were unable to recover it. 00:25:50.638 [2024-11-15 11:44:30.706105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 
00:25:50.639 [2024-11-15 11:44:30.706221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.706350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.706477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.706590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.706699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.706803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.706904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.706930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 
00:25:50.639 [2024-11-15 11:44:30.707409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.707944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.707971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 
00:25:50.639 [2024-11-15 11:44:30.708529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.708972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.708997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 
00:25:50.639 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.639 [2024-11-15 11:44:30.709705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 [2024-11-15 11:44:30.709920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.639 [2024-11-15 11:44:30.709946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.639 qpair failed and we were unable to recover it. 00:25:50.639 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:50.640 [2024-11-15 11:44:30.710031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.710146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.710283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.640 [2024-11-15 11:44:30.710395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.710503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.710612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 
00:25:50.640 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.640 [2024-11-15 11:44:30.710725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.710827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.710933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.710959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.640 [2024-11-15 11:44:30.711048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.711189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.711323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.711462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.711578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.711687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 
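[editor note] The fragments tagged 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 that are woven between the connect errors above are Bash xtrace output from the test harness running concurrently with the host's reconnect attempts. The (( i == 0 )) check at autotest_common.sh@864 followed by return 0 at @868 reads like the tail of a countdown-style wait that completed successfully, after which timing_exit start_nvmf_tgt and set +x close out target startup. A minimal sketch of that countdown pattern is below, assuming a simple TCP probe; the function name, the /dev/tcp probe, and the retry budget are illustrative assumptions, not code from the harness.

    # Hedged sketch of a countdown/retry wait like the one the xtrace above suggests.
    wait_for_tcp_listener() {
        local addr=$1 port=$2 i=${3:-50}
        until timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; do
            ((--i == 0)) && return 1   # retry budget exhausted: report failure
            sleep 0.5
        done
        return 0                       # a listener answered: report success
    }
    wait_for_tcp_listener 10.0.0.2 4420 || echo "no listener on 10.0.0.2:4420"

Probing with bash's /dev/tcp keeps the check dependency-free; when nothing is listening, the probe fails with the same condition (connection refused) that the NVMe host logs above as errno 111.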
00:25:50.640 [2024-11-15 11:44:30.711811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.711923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.711949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.712865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.712893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 
00:25:50.640 [2024-11-15 11:44:30.712986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.713012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.713096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.713123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.713208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.713234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.640 [2024-11-15 11:44:30.713326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.640 [2024-11-15 11:44:30.713353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.640 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.713438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.713464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.713544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.713570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.713652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.713676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.713766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.713793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.713879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.713909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.713997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 
00:25:50.641 [2024-11-15 11:44:30.714118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.714958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.714986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 
00:25:50.641 [2024-11-15 11:44:30.715326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.715925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.715952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 
00:25:50.641 [2024-11-15 11:44:30.716609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.716953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.716987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.717072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.717099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.717214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.717242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.717323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.717350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.641 [2024-11-15 11:44:30.717437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.641 [2024-11-15 11:44:30.717463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.641 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.717541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.717567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.717678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.717704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 
00:25:50.642 [2024-11-15 11:44:30.717788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.717814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.717904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.717930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.718845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 
00:25:50.642 [2024-11-15 11:44:30.718956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.718981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.719915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.719996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 
00:25:50.642 [2024-11-15 11:44:30.720116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.720959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.720985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.642 [2024-11-15 11:44:30.721066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.642 [2024-11-15 11:44:30.721091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.642 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.721167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 
00:25:50.643 [2024-11-15 11:44:30.721300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.721416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.721525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.721636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.721757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.721859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.721885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 
00:25:50.643 [2024-11-15 11:44:30.722529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.722969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.722996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 
00:25:50.643 [2024-11-15 11:44:30.723678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.723894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.723919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 
00:25:50.643 [2024-11-15 11:44:30.724821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.643 qpair failed and we were unable to recover it. 00:25:50.643 [2024-11-15 11:44:30.724955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.643 [2024-11-15 11:44:30.724981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.725928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.725955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 
00:25:50.644 [2024-11-15 11:44:30.726036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.726870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.726977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 
00:25:50.644 [2024-11-15 11:44:30.727227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.727944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.727970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 
00:25:50.644 [2024-11-15 11:44:30.728402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.728888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.728913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.729004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.729032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.644 [2024-11-15 11:44:30.729171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.644 [2024-11-15 11:44:30.729200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.644 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.729286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.729320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.729407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.729433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.729524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.729550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 
00:25:50.645 [2024-11-15 11:44:30.729631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.729661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.729758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.729785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.729898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.729924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 
00:25:50.645 [2024-11-15 11:44:30.730788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.730914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.730965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.731880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.731908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 
00:25:50.645 [2024-11-15 11:44:30.731996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.732023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.732113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.732140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.732219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.732246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.645 [2024-11-15 11:44:30.732334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.645 [2024-11-15 11:44:30.732378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.645 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.732513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.732540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.732622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.732648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.732731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.732758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.732856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.732882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.732965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.732991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 
00:25:50.646 [2024-11-15 11:44:30.733230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.733928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.733953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 
00:25:50.646 [2024-11-15 11:44:30.734389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.734929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.734954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 
00:25:50.646 [2024-11-15 11:44:30.735634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.735763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.735878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.646 [2024-11-15 11:44:30.735903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.736025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.736070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 [2024-11-15 11:44:30.736161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.736188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.646 qpair failed and we were unable to recover it. 00:25:50.646 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.646 [2024-11-15 11:44:30.736275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.646 [2024-11-15 11:44:30.736317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.736402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.736428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.647 [2024-11-15 11:44:30.736512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.736539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 
00:25:50.647 [2024-11-15 11:44:30.736618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.736644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.647 [2024-11-15 11:44:30.736727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.736756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.736850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.736876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.736961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.736986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 
00:25:50.647 [2024-11-15 11:44:30.737696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.737928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.737954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 
00:25:50.647 [2024-11-15 11:44:30.738859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.738963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.738989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.647 qpair failed and we were unable to recover it. 00:25:50.647 [2024-11-15 11:44:30.739782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.647 [2024-11-15 11:44:30.739808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.739890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.739915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 
00:25:50.648 [2024-11-15 11:44:30.739999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.740894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.740980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 
00:25:50.648 [2024-11-15 11:44:30.741259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.741963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.741991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 
00:25:50.648 [2024-11-15 11:44:30.742415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.742958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.742986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.743133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.743172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.743263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.743291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.743392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.743419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 00:25:50.648 [2024-11-15 11:44:30.743501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.648 [2024-11-15 11:44:30.743527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.648 qpair failed and we were unable to recover it. 
00:25:50.648 [2024-11-15 11:44:30.743610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.743636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.743717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.743743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.743839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.743864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.743951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.743980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 
00:25:50.649 [2024-11-15 11:44:30.744750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.744897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.744984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.745801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 
00:25:50.649 [2024-11-15 11:44:30.745907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.745933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.746912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.746992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.747018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 
00:25:50.649 [2024-11-15 11:44:30.747103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.747130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.747216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.747246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.747340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.747369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.649 [2024-11-15 11:44:30.747461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.649 [2024-11-15 11:44:30.747486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.649 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.747572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.747597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.747677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.747703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.747791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.747816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.747927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.747954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 
00:25:50.650 [2024-11-15 11:44:30.748270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.748904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.748930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 
00:25:50.650 [2024-11-15 11:44:30.749516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.749908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.749990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.750016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.750104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.750131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.750324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.750351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.750430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.750456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.750539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.750565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 00:25:50.650 [2024-11-15 11:44:30.750674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.650 [2024-11-15 11:44:30.750700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.650 qpair failed and we were unable to recover it. 
00:25:50.650 [2024-11-15 11:44:30.750799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.750825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.750906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.750934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.751848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 
00:25:50.651 [2024-11-15 11:44:30.751960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.751985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.752900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.752928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 
00:25:50.651 [2024-11-15 11:44:30.753132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.753938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.753965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 
00:25:50.651 [2024-11-15 11:44:30.754258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.651 [2024-11-15 11:44:30.754866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.651 qpair failed and we were unable to recover it. 00:25:50.651 [2024-11-15 11:44:30.754954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.754980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 
00:25:50.652 [2024-11-15 11:44:30.755491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.755964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.755991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 
00:25:50.652 [2024-11-15 11:44:30.756662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.756924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.756952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 
00:25:50.652 [2024-11-15 11:44:30.757877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.757903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.757990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.758913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 00:25:50.652 [2024-11-15 11:44:30.758992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.652 [2024-11-15 11:44:30.759017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.652 qpair failed and we were unable to recover it. 
00:25:50.652 [2024-11-15 11:44:30.759100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.759950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.759976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 
00:25:50.653 [2024-11-15 11:44:30.760401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.760944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.760974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 
00:25:50.653 [2024-11-15 11:44:30.761580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.761932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.761958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 
00:25:50.653 [2024-11-15 11:44:30.762778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.762903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.762929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.763030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.763075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.653 [2024-11-15 11:44:30.763164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.653 [2024-11-15 11:44:30.763190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.653 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.763276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.763309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.763437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.763462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.763541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.763567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.763683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.763708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.763790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.763816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.763903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.763932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 
00:25:50.654 [2024-11-15 11:44:30.764021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.764917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.764958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 
00:25:50.654 [2024-11-15 11:44:30.765310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.765908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.765987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.766144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.766280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.766423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 
00:25:50.654 [2024-11-15 11:44:30.766574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.766685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.766795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.766907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.766932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.767036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.654 [2024-11-15 11:44:30.767075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.654 qpair failed and we were unable to recover it. 00:25:50.654 [2024-11-15 11:44:30.767172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.767285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.767404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.767523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.767635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 
00:25:50.655 [2024-11-15 11:44:30.767739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.767852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.767965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.767992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.768787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 
00:25:50.655 [2024-11-15 11:44:30.768902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.768928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.769963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.769989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 
00:25:50.655 [2024-11-15 11:44:30.770073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.770098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.770185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.770210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.770299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.770332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.770420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.770447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.770534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.770559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.655 [2024-11-15 11:44:30.770640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.655 [2024-11-15 11:44:30.770666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.655 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.770778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.770804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.770897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.770925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 
00:25:50.656 [2024-11-15 11:44:30.771281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.771871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.771989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 
00:25:50.656 [2024-11-15 11:44:30.772470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.772948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.772972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 
00:25:50.656 [2024-11-15 11:44:30.773658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.773899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.773978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.774003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.774082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.774107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.774202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.774228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.774315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.656 [2024-11-15 11:44:30.774345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.656 qpair failed and we were unable to recover it. 00:25:50.656 [2024-11-15 11:44:30.774426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.774451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.774539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.774563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.774645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.774670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 
00:25:50.657 [2024-11-15 11:44:30.774756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.774782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.774864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.774890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.774994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.775898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.775923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 
00:25:50.657 [2024-11-15 11:44:30.776032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.776867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.776895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 
00:25:50.657 [2024-11-15 11:44:30.777271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.777939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.777968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.778062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.778091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.778182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.778212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.778299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.778331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 
00:25:50.657 [2024-11-15 11:44:30.778442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.657 [2024-11-15 11:44:30.778468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.657 qpair failed and we were unable to recover it. 00:25:50.657 [2024-11-15 11:44:30.778556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.778582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.778677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.778703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.778791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.778817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.778931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.778958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.779063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.779175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.779298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 Malloc0 00:25:50.658 [2024-11-15 11:44:30.779417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.779547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 
00:25:50.658 [2024-11-15 11:44:30.779641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.779668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.658 [2024-11-15 11:44:30.779810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.779889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.779915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:50.658 [2024-11-15 11:44:30.780005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.658 [2024-11-15 11:44:30.780120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.658 [2024-11-15 11:44:30.780251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.780398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.780519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 
00:25:50.658 [2024-11-15 11:44:30.780635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.780756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.780868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.780894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.780980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 
00:25:50.658 [2024-11-15 11:44:30.781756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.781882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.658 qpair failed and we were unable to recover it. 00:25:50.658 [2024-11-15 11:44:30.781991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.658 [2024-11-15 11:44:30.782019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.782853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.782845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.659 [2024-11-15 11:44:30.782880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 
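The "*** TCP Transport Init ***" notice above is the target-side confirmation of the harness's "rpc_cmd nvmf_create_transport -t tcp -o" call. A minimal sketch of the same step issued directly against a running nvmf_tgt with SPDK's rpc.py (default RPC socket assumed; the harness's extra -o option is omitted here):

    # create the TCP transport on the target (sketch; assumes scripts/rpc.py from an SPDK checkout)
    scripts/rpc.py nvmf_create_transport -t tcp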
00:25:50.659 [2024-11-15 11:44:30.782973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.783896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.783922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 
00:25:50.659 [2024-11-15 11:44:30.784115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.784936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.784962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.785055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.785194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 
00:25:50.659 [2024-11-15 11:44:30.785322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.785488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.785606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.785728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.659 [2024-11-15 11:44:30.785839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.659 [2024-11-15 11:44:30.785865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.659 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.785951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.785977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 
00:25:50.660 [2024-11-15 11:44:30.786523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.786934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.786960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 
00:25:50.660 [2024-11-15 11:44:30.787844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.787870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.787984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.788921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.788946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 
00:25:50.660 [2024-11-15 11:44:30.789035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.789884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.660 [2024-11-15 11:44:30.789909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.660 qpair failed and we were unable to recover it. 00:25:50.660 [2024-11-15 11:44:30.790025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 
00:25:50.661 [2024-11-15 11:44:30.790236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.790902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.790927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.661 [2024-11-15 11:44:30.791123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 
00:25:50.661 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.661 [2024-11-15 11:44:30.791230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.661 [2024-11-15 11:44:30.791383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.661 [2024-11-15 11:44:30.791492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.791917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.791942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 
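The nvmf_create_subsystem call interleaved above creates subsystem nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial number SPDK00000000000001. The equivalent step as a standalone sketch, under the same rpc.py assumption as before:

    # create the subsystem the host will connect to (sketch)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001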
00:25:50.661 [2024-11-15 11:44:30.792188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.792894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.792975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.793000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.661 [2024-11-15 11:44:30.793082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.661 [2024-11-15 11:44:30.793107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.661 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.793239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.793279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 
00:25:50.662 [2024-11-15 11:44:30.793396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.793427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.793519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.793545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.793630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.793656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.793773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.793800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.793881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.793908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.793995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 
00:25:50.662 [2024-11-15 11:44:30.794616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.794965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.794994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 
00:25:50.662 [2024-11-15 11:44:30.795786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.795895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.795921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.796862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 
00:25:50.662 [2024-11-15 11:44:30.796967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.796993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.797077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.797104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.797205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.797245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.797338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.797366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.797447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.797473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.797563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.662 [2024-11-15 11:44:30.797589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.662 qpair failed and we were unable to recover it. 00:25:50.662 [2024-11-15 11:44:30.797672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.797698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.797777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.797803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.797884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.797909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 
00:25:50.663 [2024-11-15 11:44:30.798142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.798972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.798999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.799086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.663 [2024-11-15 11:44:30.799114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 
00:25:50.663 [2024-11-15 11:44:30.799209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.663 [2024-11-15 11:44:30.799317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.663 [2024-11-15 11:44:30.799426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.663 [2024-11-15 11:44:30.799537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.799647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.799751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.799858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.799969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.799996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 
00:25:50.663 [2024-11-15 11:44:30.800075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.800885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.800911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.801025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.801055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 00:25:50.663 [2024-11-15 11:44:30.801143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.663 [2024-11-15 11:44:30.801170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.663 qpair failed and we were unable to recover it. 
00:25:50.663 [2024-11-15 11:44:30.801259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.801380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.801493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.801598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.801707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.801814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.801912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.801938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 
00:25:50.664 [2024-11-15 11:44:30.802414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.802855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.802883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 
00:25:50.664 [2024-11-15 11:44:30.803580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.803913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.803945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 
00:25:50.664 [2024-11-15 11:44:30.804750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.804958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.804983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.805088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.805113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.805203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.805233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.805323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.805351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.805440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.664 [2024-11-15 11:44:30.805467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.664 qpair failed and we were unable to recover it. 00:25:50.664 [2024-11-15 11:44:30.805549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.805575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.805657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.805683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.805766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.805792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 
00:25:50.665 [2024-11-15 11:44:30.805882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.805908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.806848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.806874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.807004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 
00:25:50.665 [2024-11-15 11:44:30.807120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.665 [2024-11-15 11:44:30.807147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.807239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.665 [2024-11-15 11:44:30.807358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.665 [2024-11-15 11:44:30.807470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.665 [2024-11-15 11:44:30.807627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.807742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.807852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.807877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.807970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 
00:25:50.665 [2024-11-15 11:44:30.808109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.808949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.808978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.809064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.809090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.665 [2024-11-15 11:44:30.809172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.809199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 
00:25:50.665 [2024-11-15 11:44:30.809278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.665 [2024-11-15 11:44:30.809314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.665 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.809400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.809426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.809518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.809547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.809632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.809658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.809737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.809763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.809848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.809875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd38000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.809968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b1fa0 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd2c000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 
00:25:50.666 [2024-11-15 11:44:30.810459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.810878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.666 [2024-11-15 11:44:30.810904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fdd30000b90 with addr=10.0.0.2, port=4420 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.811441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.666 [2024-11-15 11:44:30.813714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.666 [2024-11-15 11:44:30.813853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.666 [2024-11-15 11:44:30.813881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.666 [2024-11-15 11:44:30.813896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.666 [2024-11-15 11:44:30.813909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.666 [2024-11-15 11:44:30.813946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.666 qpair failed and we were unable to recover it. 
00:25:50.666 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.666 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:50.666 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.666 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.666 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.666 11:44:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3039954 00:25:50.666 [2024-11-15 11:44:30.823501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.666 [2024-11-15 11:44:30.823590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.666 [2024-11-15 11:44:30.823617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.666 [2024-11-15 11:44:30.823631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.666 [2024-11-15 11:44:30.823645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.666 [2024-11-15 11:44:30.823675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.833530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.666 [2024-11-15 11:44:30.833618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.666 [2024-11-15 11:44:30.833645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.666 [2024-11-15 11:44:30.833659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.666 [2024-11-15 11:44:30.833672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.666 [2024-11-15 11:44:30.833702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.666 qpair failed and we were unable to recover it. 
00:25:50.666 [2024-11-15 11:44:30.843564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.666 [2024-11-15 11:44:30.843661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.666 [2024-11-15 11:44:30.843687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.666 [2024-11-15 11:44:30.843701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.666 [2024-11-15 11:44:30.843715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.666 [2024-11-15 11:44:30.843746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.853473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.666 [2024-11-15 11:44:30.853570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.666 [2024-11-15 11:44:30.853596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.666 [2024-11-15 11:44:30.853612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.666 [2024-11-15 11:44:30.853626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.666 [2024-11-15 11:44:30.853656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.666 qpair failed and we were unable to recover it. 00:25:50.666 [2024-11-15 11:44:30.863513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.666 [2024-11-15 11:44:30.863629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.666 [2024-11-15 11:44:30.863655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.666 [2024-11-15 11:44:30.863669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.666 [2024-11-15 11:44:30.863682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.666 [2024-11-15 11:44:30.863711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.666 qpair failed and we were unable to recover it. 
00:25:50.666 [2024-11-15 11:44:30.873549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.873640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.873666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.873681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.873693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.873725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.883621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.883714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.883740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.883755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.883767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.883798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.893614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.893707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.893734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.893748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.893761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.893791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 
00:25:50.667 [2024-11-15 11:44:30.903619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.903718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.903750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.903765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.903779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.903809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.913745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.913825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.913850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.913864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.913877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.913907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.923681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.923774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.923800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.923814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.923827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.923857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 
00:25:50.667 [2024-11-15 11:44:30.933706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.933798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.933823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.933838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.933851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.933881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.943768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.943894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.943920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.943934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.943953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.943983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.953760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.953842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.953868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.953882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.953895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.953926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 
00:25:50.667 [2024-11-15 11:44:30.963801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.963890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.963916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.963930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.963942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.963974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.973845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.973931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.973957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.973971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.973983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.974014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:30.983835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.983923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.983949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.983963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.983976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.984005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 
00:25:50.667 [2024-11-15 11:44:30.993885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:30.993967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.667 [2024-11-15 11:44:30.993993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.667 [2024-11-15 11:44:30.994007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.667 [2024-11-15 11:44:30.994020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.667 [2024-11-15 11:44:30.994050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.667 qpair failed and we were unable to recover it. 00:25:50.667 [2024-11-15 11:44:31.003958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.667 [2024-11-15 11:44:31.004054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.668 [2024-11-15 11:44:31.004082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.668 [2024-11-15 11:44:31.004098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.668 [2024-11-15 11:44:31.004111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.668 [2024-11-15 11:44:31.004141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.668 qpair failed and we were unable to recover it. 00:25:50.668 [2024-11-15 11:44:31.013966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.668 [2024-11-15 11:44:31.014052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.668 [2024-11-15 11:44:31.014078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.668 [2024-11-15 11:44:31.014092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.668 [2024-11-15 11:44:31.014105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.668 [2024-11-15 11:44:31.014135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.668 qpair failed and we were unable to recover it. 
00:25:50.927 [2024-11-15 11:44:31.023960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.927 [2024-11-15 11:44:31.024049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.927 [2024-11-15 11:44:31.024083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.024105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.024124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.024173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.033987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.034076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.034119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.034134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.034147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.034179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.044037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.044132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.044159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.044173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.044186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.044216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 
00:25:50.928 [2024-11-15 11:44:31.054037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.054122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.054149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.054163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.054177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.054209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.064058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.064144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.064170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.064184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.064196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.064226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.074081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.074164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.074191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.074206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.074224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.074255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 
00:25:50.928 [2024-11-15 11:44:31.084138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.084229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.084255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.084269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.084282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.084319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.094143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.094238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.094265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.094280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.094296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.094353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.104179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.104269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.104295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.104319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.104333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.104364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 
00:25:50.928 [2024-11-15 11:44:31.114233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.114331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.114358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.114372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.114385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.114415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.124242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.124335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.124360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.124374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.124387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.928 [2024-11-15 11:44:31.124417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.928 qpair failed and we were unable to recover it. 00:25:50.928 [2024-11-15 11:44:31.134361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.928 [2024-11-15 11:44:31.134451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.928 [2024-11-15 11:44:31.134477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.928 [2024-11-15 11:44:31.134491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.928 [2024-11-15 11:44:31.134504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.134534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 
00:25:50.929 [2024-11-15 11:44:31.144291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.144398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.144425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.144439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.144451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.144481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.154344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.154428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.154454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.154468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.154481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.154510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.164374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.164513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.164544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.164559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.164572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.164602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 
00:25:50.929 [2024-11-15 11:44:31.174375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.174460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.174485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.174500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.174513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.174542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.184405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.184493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.184519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.184533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.184546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.184590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.194519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.194604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.194632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.194646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.194659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.194689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 
00:25:50.929 [2024-11-15 11:44:31.204597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.204689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.204715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.204737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.204751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.204782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.214495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.214585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.214611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.214625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.214638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.214667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.224529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.224613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.224638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.224652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.224666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.224697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 
00:25:50.929 [2024-11-15 11:44:31.234558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.234637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.234663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.234677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.234690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.234720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.244610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.929 [2024-11-15 11:44:31.244706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.929 [2024-11-15 11:44:31.244735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.929 [2024-11-15 11:44:31.244755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.929 [2024-11-15 11:44:31.244769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.929 [2024-11-15 11:44:31.244806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.929 qpair failed and we were unable to recover it. 00:25:50.929 [2024-11-15 11:44:31.254620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.254705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.254732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.254746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.254759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.254788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 
00:25:50.930 [2024-11-15 11:44:31.264678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.264762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.264788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.264802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.264815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.264845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 00:25:50.930 [2024-11-15 11:44:31.274698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.274786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.274815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.274831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.274844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.274876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 00:25:50.930 [2024-11-15 11:44:31.284800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.284895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.284921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.284935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.284948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.284978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 
00:25:50.930 [2024-11-15 11:44:31.294792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.294879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.294908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.294923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.294936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.294967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 00:25:50.930 [2024-11-15 11:44:31.304757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.304843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.304869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.304884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.304897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.304939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 00:25:50.930 [2024-11-15 11:44:31.314779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.314864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.314890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.314904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.314917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.314948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 
00:25:50.930 [2024-11-15 11:44:31.324823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.324922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.324948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.324962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.324975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.325007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 00:25:50.930 [2024-11-15 11:44:31.334865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.334951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.334980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.335003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.335018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.335049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 00:25:50.930 [2024-11-15 11:44:31.344890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.930 [2024-11-15 11:44:31.344991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.930 [2024-11-15 11:44:31.345017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.930 [2024-11-15 11:44:31.345032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.930 [2024-11-15 11:44:31.345045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:50.930 [2024-11-15 11:44:31.345076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:50.930 qpair failed and we were unable to recover it. 
00:25:51.190 [2024-11-15 11:44:31.354902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.190 [2024-11-15 11:44:31.354985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.190 [2024-11-15 11:44:31.355013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.190 [2024-11-15 11:44:31.355028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.190 [2024-11-15 11:44:31.355041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.190 [2024-11-15 11:44:31.355073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.190 qpair failed and we were unable to recover it. 00:25:51.190 [2024-11-15 11:44:31.365006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.190 [2024-11-15 11:44:31.365116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.190 [2024-11-15 11:44:31.365143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.190 [2024-11-15 11:44:31.365157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.190 [2024-11-15 11:44:31.365170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.190 [2024-11-15 11:44:31.365201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.190 qpair failed and we were unable to recover it. 00:25:51.190 [2024-11-15 11:44:31.375026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.190 [2024-11-15 11:44:31.375117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.190 [2024-11-15 11:44:31.375143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.190 [2024-11-15 11:44:31.375157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.190 [2024-11-15 11:44:31.375171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.190 [2024-11-15 11:44:31.375207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.190 qpair failed and we were unable to recover it. 
00:25:51.190 [2024-11-15 11:44:31.385048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.190 [2024-11-15 11:44:31.385148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.385174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.385188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.385201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.385232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.395155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.395235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.395260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.395275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.395288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.395325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.405091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.405183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.405209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.405224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.405237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.405267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 
00:25:51.191 [2024-11-15 11:44:31.415090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.415174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.415202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.415219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.415233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.415263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.425185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.425264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.425288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.425311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.425326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.425356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.435122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.435199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.435224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.435238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.435251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.435282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 
00:25:51.191 [2024-11-15 11:44:31.445187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.445300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.445335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.445350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.445363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.445392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.455188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.455278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.455314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.455332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.455347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.455376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.465215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.465312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.465342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.465357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.465369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.465399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 
00:25:51.191 [2024-11-15 11:44:31.475250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.475367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.475393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.475407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.191 [2024-11-15 11:44:31.475420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.191 [2024-11-15 11:44:31.475449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.191 qpair failed and we were unable to recover it. 00:25:51.191 [2024-11-15 11:44:31.485330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.191 [2024-11-15 11:44:31.485424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.191 [2024-11-15 11:44:31.485449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.191 [2024-11-15 11:44:31.485463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.485476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.485506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.495292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.495388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.495413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.495427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.495440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.495469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 
00:25:51.192 [2024-11-15 11:44:31.505313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.505402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.505428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.505442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.505460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.505492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.515350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.515471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.515496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.515510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.515522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.515552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.525408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.525518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.525544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.525559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.525571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.525604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 
00:25:51.192 [2024-11-15 11:44:31.535436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.535527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.535553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.535567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.535579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.535621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.545444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.545537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.545563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.545577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.545589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.545620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.555461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.555549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.555575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.555589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.555601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.555632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 
00:25:51.192 [2024-11-15 11:44:31.565542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.565643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.565672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.565686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.565699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.565729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.575552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.575658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.575684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.575699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.575712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.192 [2024-11-15 11:44:31.575745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.192 qpair failed and we were unable to recover it. 00:25:51.192 [2024-11-15 11:44:31.585551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.192 [2024-11-15 11:44:31.585631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.192 [2024-11-15 11:44:31.585657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.192 [2024-11-15 11:44:31.585671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.192 [2024-11-15 11:44:31.585684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.193 [2024-11-15 11:44:31.585714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.193 qpair failed and we were unable to recover it. 
00:25:51.193 [2024-11-15 11:44:31.595918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.193 [2024-11-15 11:44:31.596083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.193 [2024-11-15 11:44:31.596141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.193 [2024-11-15 11:44:31.596156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.193 [2024-11-15 11:44:31.596170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.193 [2024-11-15 11:44:31.596215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.193 qpair failed and we were unable to recover it. 00:25:51.193 [2024-11-15 11:44:31.605745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.193 [2024-11-15 11:44:31.605836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.193 [2024-11-15 11:44:31.605861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.193 [2024-11-15 11:44:31.605875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.193 [2024-11-15 11:44:31.605889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.193 [2024-11-15 11:44:31.605918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.193 qpair failed and we were unable to recover it. 00:25:51.452 [2024-11-15 11:44:31.615728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.452 [2024-11-15 11:44:31.615829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.452 [2024-11-15 11:44:31.615857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.452 [2024-11-15 11:44:31.615872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.452 [2024-11-15 11:44:31.615885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.452 [2024-11-15 11:44:31.615916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.452 qpair failed and we were unable to recover it. 
00:25:51.452 [2024-11-15 11:44:31.625724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.452 [2024-11-15 11:44:31.625802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.452 [2024-11-15 11:44:31.625830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.452 [2024-11-15 11:44:31.625844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.452 [2024-11-15 11:44:31.625857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.452 [2024-11-15 11:44:31.625901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.452 qpair failed and we were unable to recover it. 00:25:51.452 [2024-11-15 11:44:31.635713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.452 [2024-11-15 11:44:31.635804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.452 [2024-11-15 11:44:31.635830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.452 [2024-11-15 11:44:31.635844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.452 [2024-11-15 11:44:31.635863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.452 [2024-11-15 11:44:31.635894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.452 qpair failed and we were unable to recover it. 00:25:51.452 [2024-11-15 11:44:31.645766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.452 [2024-11-15 11:44:31.645859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.452 [2024-11-15 11:44:31.645887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.452 [2024-11-15 11:44:31.645903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.452 [2024-11-15 11:44:31.645917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.452 [2024-11-15 11:44:31.645947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.452 qpair failed and we were unable to recover it. 
00:25:51.452 [2024-11-15 11:44:31.655850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.452 [2024-11-15 11:44:31.655944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.452 [2024-11-15 11:44:31.655969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.452 [2024-11-15 11:44:31.655985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.452 [2024-11-15 11:44:31.655999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.452 [2024-11-15 11:44:31.656029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 00:25:51.453 [2024-11-15 11:44:31.665825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.665904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.665930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.665945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.665957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.665986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 00:25:51.453 [2024-11-15 11:44:31.675850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.675935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.675961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.675975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.675988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.676019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 
00:25:51.453 [2024-11-15 11:44:31.685891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.685981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.686010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.686025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.686038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.686080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 00:25:51.453 [2024-11-15 11:44:31.695887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.696004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.696030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.696044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.696058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.696089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 00:25:51.453 [2024-11-15 11:44:31.705951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.706076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.706102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.706116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.706129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.706160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 
00:25:51.453 [2024-11-15 11:44:31.715972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.716061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.716087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.716101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.716114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.716144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 00:25:51.453 [2024-11-15 11:44:31.725976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.726066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.726097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.726111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.726124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.726154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 00:25:51.453 [2024-11-15 11:44:31.736020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.736117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.453 [2024-11-15 11:44:31.736145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.453 [2024-11-15 11:44:31.736160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.453 [2024-11-15 11:44:31.736173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.453 [2024-11-15 11:44:31.736205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.453 qpair failed and we were unable to recover it. 
00:25:51.453 [2024-11-15 11:44:31.746041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.453 [2024-11-15 11:44:31.746141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.746166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.746181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.746194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.746223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 00:25:51.454 [2024-11-15 11:44:31.756091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.756187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.756216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.756232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.756245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.756276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 00:25:51.454 [2024-11-15 11:44:31.766116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.766208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.766233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.766254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.766268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.766297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 
00:25:51.454 [2024-11-15 11:44:31.776116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.776201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.776227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.776241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.776254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.776284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 00:25:51.454 [2024-11-15 11:44:31.786126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.786209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.786234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.786249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.786262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.786292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 00:25:51.454 [2024-11-15 11:44:31.796245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.796337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.796364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.796378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.796390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.796422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 
00:25:51.454 [2024-11-15 11:44:31.806224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.806348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.806374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.806389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.806402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.806438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 00:25:51.454 [2024-11-15 11:44:31.816223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.816320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.816346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.816360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.816374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.816406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 00:25:51.454 [2024-11-15 11:44:31.826285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.454 [2024-11-15 11:44:31.826395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.454 [2024-11-15 11:44:31.826420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.454 [2024-11-15 11:44:31.826435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.454 [2024-11-15 11:44:31.826448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.454 [2024-11-15 11:44:31.826481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.454 qpair failed and we were unable to recover it. 
00:25:51.455 [2024-11-15 11:44:31.836361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.455 [2024-11-15 11:44:31.836441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.455 [2024-11-15 11:44:31.836467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.455 [2024-11-15 11:44:31.836481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.455 [2024-11-15 11:44:31.836494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.455 [2024-11-15 11:44:31.836526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.455 qpair failed and we were unable to recover it. 00:25:51.455 [2024-11-15 11:44:31.846361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.455 [2024-11-15 11:44:31.846473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.455 [2024-11-15 11:44:31.846498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.455 [2024-11-15 11:44:31.846512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.455 [2024-11-15 11:44:31.846525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.455 [2024-11-15 11:44:31.846555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.455 qpair failed and we were unable to recover it. 00:25:51.455 [2024-11-15 11:44:31.856365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.455 [2024-11-15 11:44:31.856457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.455 [2024-11-15 11:44:31.856482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.455 [2024-11-15 11:44:31.856496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.455 [2024-11-15 11:44:31.856508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.455 [2024-11-15 11:44:31.856539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.455 qpair failed and we were unable to recover it. 
00:25:51.455 [2024-11-15 11:44:31.866415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.455 [2024-11-15 11:44:31.866500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.455 [2024-11-15 11:44:31.866526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.455 [2024-11-15 11:44:31.866540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.455 [2024-11-15 11:44:31.866553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.455 [2024-11-15 11:44:31.866582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.455 qpair failed and we were unable to recover it. 00:25:51.715 [2024-11-15 11:44:31.876431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.876551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.876579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.876594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.876607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.876638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 00:25:51.715 [2024-11-15 11:44:31.886433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.886526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.886553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.886569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.886583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.886614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 
00:25:51.715 [2024-11-15 11:44:31.896484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.896601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.896627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.896648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.896662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.896692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 00:25:51.715 [2024-11-15 11:44:31.906502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.906588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.906613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.906628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.906641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.906672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 00:25:51.715 [2024-11-15 11:44:31.916549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.916638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.916664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.916678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.916691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.916721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 
00:25:51.715 [2024-11-15 11:44:31.926590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.926696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.926722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.926736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.926749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.926780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 00:25:51.715 [2024-11-15 11:44:31.936595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.936685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.936711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.936725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.936738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.715 [2024-11-15 11:44:31.936774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.715 qpair failed and we were unable to recover it. 00:25:51.715 [2024-11-15 11:44:31.946633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.715 [2024-11-15 11:44:31.946753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.715 [2024-11-15 11:44:31.946779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.715 [2024-11-15 11:44:31.946794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.715 [2024-11-15 11:44:31.946806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:31.946836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 
00:25:51.716 [2024-11-15 11:44:31.956700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:31.956789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:31.956814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:31.956829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:31.956841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:31.956871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:31.966688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:31.966782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:31.966807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:31.966821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:31.966834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:31.966864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:31.976726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:31.976811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:31.976837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:31.976851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:31.976864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:31.976894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 
00:25:51.716 [2024-11-15 11:44:31.986855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:31.986942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:31.986968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:31.986982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:31.986995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:31.987025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:31.996784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:31.996908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:31.996933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:31.996947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:31.996960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:31.996990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:32.006778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.006866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.006891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.006905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.006918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.006949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 
00:25:51.716 [2024-11-15 11:44:32.016813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.016898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.016924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.016939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.016951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.016980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:32.026872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.026971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.027002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.027017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.027031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.027062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:32.036863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.036946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.036972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.036986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.036999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.037031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 
00:25:51.716 [2024-11-15 11:44:32.046922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.047038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.047064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.047079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.047091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.047124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:32.056931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.057016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.057042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.057056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.057069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.057100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 00:25:51.716 [2024-11-15 11:44:32.066994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.067078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.067105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.067119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.067138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.716 [2024-11-15 11:44:32.067170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.716 qpair failed and we were unable to recover it. 
00:25:51.716 [2024-11-15 11:44:32.077119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.716 [2024-11-15 11:44:32.077254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.716 [2024-11-15 11:44:32.077279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.716 [2024-11-15 11:44:32.077294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.716 [2024-11-15 11:44:32.077313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.077345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 00:25:51.717 [2024-11-15 11:44:32.087043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.717 [2024-11-15 11:44:32.087132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.717 [2024-11-15 11:44:32.087158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.717 [2024-11-15 11:44:32.087172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.717 [2024-11-15 11:44:32.087185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.087216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 00:25:51.717 [2024-11-15 11:44:32.097036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.717 [2024-11-15 11:44:32.097120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.717 [2024-11-15 11:44:32.097146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.717 [2024-11-15 11:44:32.097160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.717 [2024-11-15 11:44:32.097173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.097203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 
00:25:51.717 [2024-11-15 11:44:32.107115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.717 [2024-11-15 11:44:32.107209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.717 [2024-11-15 11:44:32.107235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.717 [2024-11-15 11:44:32.107249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.717 [2024-11-15 11:44:32.107263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.107292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 00:25:51.717 [2024-11-15 11:44:32.117142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.717 [2024-11-15 11:44:32.117265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.717 [2024-11-15 11:44:32.117290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.717 [2024-11-15 11:44:32.117313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.717 [2024-11-15 11:44:32.117329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.117358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 00:25:51.717 [2024-11-15 11:44:32.127157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.717 [2024-11-15 11:44:32.127245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.717 [2024-11-15 11:44:32.127270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.717 [2024-11-15 11:44:32.127284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.717 [2024-11-15 11:44:32.127296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.127333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 
00:25:51.717 [2024-11-15 11:44:32.137161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.717 [2024-11-15 11:44:32.137254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.717 [2024-11-15 11:44:32.137281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.717 [2024-11-15 11:44:32.137296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.717 [2024-11-15 11:44:32.137324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.717 [2024-11-15 11:44:32.137365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.717 qpair failed and we were unable to recover it. 00:25:51.977 [2024-11-15 11:44:32.147208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.977 [2024-11-15 11:44:32.147288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.977 [2024-11-15 11:44:32.147326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.977 [2024-11-15 11:44:32.147342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.977 [2024-11-15 11:44:32.147355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.977 [2024-11-15 11:44:32.147389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.977 qpair failed and we were unable to recover it. 00:25:51.977 [2024-11-15 11:44:32.157266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.977 [2024-11-15 11:44:32.157355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.977 [2024-11-15 11:44:32.157390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.977 [2024-11-15 11:44:32.157406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.977 [2024-11-15 11:44:32.157419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.977 [2024-11-15 11:44:32.157449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.977 qpair failed and we were unable to recover it. 
00:25:51.977 [2024-11-15 11:44:32.167257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.977 [2024-11-15 11:44:32.167373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.977 [2024-11-15 11:44:32.167399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.977 [2024-11-15 11:44:32.167413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.977 [2024-11-15 11:44:32.167426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.977 [2024-11-15 11:44:32.167457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.977 qpair failed and we were unable to recover it. 00:25:51.977 [2024-11-15 11:44:32.177272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.977 [2024-11-15 11:44:32.177369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.977 [2024-11-15 11:44:32.177395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.977 [2024-11-15 11:44:32.177409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.977 [2024-11-15 11:44:32.177422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.977 [2024-11-15 11:44:32.177452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.977 qpair failed and we were unable to recover it. 00:25:51.977 [2024-11-15 11:44:32.187285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.977 [2024-11-15 11:44:32.187377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.977 [2024-11-15 11:44:32.187403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.977 [2024-11-15 11:44:32.187417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.977 [2024-11-15 11:44:32.187430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.977 [2024-11-15 11:44:32.187461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.977 qpair failed and we were unable to recover it. 
00:25:51.977 [2024-11-15 11:44:32.197420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.977 [2024-11-15 11:44:32.197504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.977 [2024-11-15 11:44:32.197530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.977 [2024-11-15 11:44:32.197545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.977 [2024-11-15 11:44:32.197564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.197594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.207425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.207525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.207551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.207566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.207579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.207612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.217420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.217507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.217533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.217547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.217559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.217590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 
00:25:51.978 [2024-11-15 11:44:32.227419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.227541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.227567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.227581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.227595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.227625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.237449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.237532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.237557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.237571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.237585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.237615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.247500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.247596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.247624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.247639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.247653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.247683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 
00:25:51.978 [2024-11-15 11:44:32.257528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.257615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.257641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.257655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.257668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.257698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.267578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.267699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.267724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.267738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.267751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.267782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.277583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.277670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.277696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.277710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.277723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.277752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 
00:25:51.978 [2024-11-15 11:44:32.287628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.287720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.287754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.287770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.287783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.287814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.297634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.297718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.297744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.297759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.297772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.297802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 00:25:51.978 [2024-11-15 11:44:32.307710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.307835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.307861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.307875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.307888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.978 [2024-11-15 11:44:32.307917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.978 qpair failed and we were unable to recover it. 
00:25:51.978 [2024-11-15 11:44:32.317668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.978 [2024-11-15 11:44:32.317749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.978 [2024-11-15 11:44:32.317774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.978 [2024-11-15 11:44:32.317789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.978 [2024-11-15 11:44:32.317802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.317832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 00:25:51.979 [2024-11-15 11:44:32.327718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.327813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.327838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.327858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.327871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.327901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 00:25:51.979 [2024-11-15 11:44:32.337743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.337833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.337858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.337872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.337885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.337914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 
00:25:51.979 [2024-11-15 11:44:32.347761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.347840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.347867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.347881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.347894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.347935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 00:25:51.979 [2024-11-15 11:44:32.357912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.358002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.358031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.358049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.358063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.358094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 00:25:51.979 [2024-11-15 11:44:32.367829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.367936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.367962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.367977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.367990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.368026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 
00:25:51.979 [2024-11-15 11:44:32.377884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.377970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.377996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.378010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.378025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.378054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 00:25:51.979 [2024-11-15 11:44:32.387878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.387962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.387987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.388001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.388014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.388044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 00:25:51.979 [2024-11-15 11:44:32.397931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.979 [2024-11-15 11:44:32.398025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.979 [2024-11-15 11:44:32.398057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.979 [2024-11-15 11:44:32.398080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.979 [2024-11-15 11:44:32.398099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:51.979 [2024-11-15 11:44:32.398135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:51.979 qpair failed and we were unable to recover it. 
00:25:52.239 [2024-11-15 11:44:32.407935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.408028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.408056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.408070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.408083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.408113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.418051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.418192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.418219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.418233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.418246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.418277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.428016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.428097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.428121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.428134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.428146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.428176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 
00:25:52.239 [2024-11-15 11:44:32.438007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.438109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.438135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.438149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.438162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.438191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.448077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.448170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.448196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.448210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.448223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.448252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.458102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.458215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.458241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.458263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.458279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.458318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 
00:25:52.239 [2024-11-15 11:44:32.468096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.468179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.468205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.468220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.468233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.468262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.478126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.478209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.478235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.478249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.478262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.478292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.488176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.488275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.488300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.488327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.488341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.488371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 
00:25:52.239 [2024-11-15 11:44:32.498264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.498369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.498395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.498409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.498422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.498460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.239 [2024-11-15 11:44:32.508221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.239 [2024-11-15 11:44:32.508317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.239 [2024-11-15 11:44:32.508344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.239 [2024-11-15 11:44:32.508358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.239 [2024-11-15 11:44:32.508371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.239 [2024-11-15 11:44:32.508402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.239 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.518247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.518371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.518397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.518412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.518425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.518457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 
00:25:52.240 [2024-11-15 11:44:32.528323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.528418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.528443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.528457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.528471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.528500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.538275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.538416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.538442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.538457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.538470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.538500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.548340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.548463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.548488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.548503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.548516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.548546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 
00:25:52.240 [2024-11-15 11:44:32.558446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.558591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.558617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.558631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.558644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.558674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.568411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.568511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.568536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.568550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.568563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.568594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.578484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.578619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.578645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.578659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.578672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.578701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 
00:25:52.240 [2024-11-15 11:44:32.588433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.588518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.588549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.588564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.588577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.588619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.598441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.598525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.598551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.598566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.598579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.598608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.608552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.608689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.608715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.608729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.608742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.608771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 
00:25:52.240 [2024-11-15 11:44:32.618528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.618622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.618647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.618662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.618674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.618704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.628580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.628711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.628736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.628751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.628770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.628800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 00:25:52.240 [2024-11-15 11:44:32.638607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.240 [2024-11-15 11:44:32.638689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.240 [2024-11-15 11:44:32.638714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.240 [2024-11-15 11:44:32.638728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.240 [2024-11-15 11:44:32.638741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.240 [2024-11-15 11:44:32.638770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.240 qpair failed and we were unable to recover it. 
00:25:52.240 [2024-11-15 11:44:32.648633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.241 [2024-11-15 11:44:32.648719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.241 [2024-11-15 11:44:32.648744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.241 [2024-11-15 11:44:32.648758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.241 [2024-11-15 11:44:32.648771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.241 [2024-11-15 11:44:32.648802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.241 qpair failed and we were unable to recover it. 00:25:52.241 [2024-11-15 11:44:32.658662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.241 [2024-11-15 11:44:32.658749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.241 [2024-11-15 11:44:32.658776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.241 [2024-11-15 11:44:32.658791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.241 [2024-11-15 11:44:32.658806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.241 [2024-11-15 11:44:32.658838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.241 qpair failed and we were unable to recover it. 00:25:52.500 [2024-11-15 11:44:32.668790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.500 [2024-11-15 11:44:32.668875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.500 [2024-11-15 11:44:32.668902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.500 [2024-11-15 11:44:32.668917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.500 [2024-11-15 11:44:32.668930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.500 [2024-11-15 11:44:32.668959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.500 qpair failed and we were unable to recover it. 
00:25:52.500 [2024-11-15 11:44:32.678667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.500 [2024-11-15 11:44:32.678798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.500 [2024-11-15 11:44:32.678824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.500 [2024-11-15 11:44:32.678838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.500 [2024-11-15 11:44:32.678851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.500 [2024-11-15 11:44:32.678883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.500 qpair failed and we were unable to recover it. 00:25:52.500 [2024-11-15 11:44:32.688752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.500 [2024-11-15 11:44:32.688845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.500 [2024-11-15 11:44:32.688870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.500 [2024-11-15 11:44:32.688884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.500 [2024-11-15 11:44:32.688897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.500 [2024-11-15 11:44:32.688927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.500 qpair failed and we were unable to recover it. 00:25:52.500 [2024-11-15 11:44:32.698777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.500 [2024-11-15 11:44:32.698864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.500 [2024-11-15 11:44:32.698889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.500 [2024-11-15 11:44:32.698903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.500 [2024-11-15 11:44:32.698915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.500 [2024-11-15 11:44:32.698945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.500 qpair failed and we were unable to recover it. 
00:25:52.500 [2024-11-15 11:44:32.708822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.500 [2024-11-15 11:44:32.708916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.500 [2024-11-15 11:44:32.708945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.500 [2024-11-15 11:44:32.708959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.708972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.709002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.718827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.718911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.718943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.718958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.718971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.719002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.728828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.728948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.728974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.728988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.729001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.729043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 
00:25:52.501 [2024-11-15 11:44:32.738847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.738934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.738960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.738974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.738987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.739016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.748912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.748997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.749023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.749037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.749049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.749080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.758949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.759082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.759110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.759124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.759143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.759174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 
00:25:52.501 [2024-11-15 11:44:32.768972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.769069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.769098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.769115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.769128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.769159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.779001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.779089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.779115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.779129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.779142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.779173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.789016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.789105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.789130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.789145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.789158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.789187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 
00:25:52.501 [2024-11-15 11:44:32.799030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.799114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.799140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.799154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.799168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.799197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.809067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.809159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.809184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.809198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.809212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.809241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.819197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.819317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.819343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.819357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.819370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.819401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 
00:25:52.501 [2024-11-15 11:44:32.829124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.829210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.829236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.829250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.829263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.501 [2024-11-15 11:44:32.829292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.501 qpair failed and we were unable to recover it. 00:25:52.501 [2024-11-15 11:44:32.839171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.501 [2024-11-15 11:44:32.839258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.501 [2024-11-15 11:44:32.839284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.501 [2024-11-15 11:44:32.839298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.501 [2024-11-15 11:44:32.839318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.839349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 00:25:52.502 [2024-11-15 11:44:32.849269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.849368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.849400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.849414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.849427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.849457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 
00:25:52.502 [2024-11-15 11:44:32.859197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.859320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.859345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.859359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.859372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.859403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 00:25:52.502 [2024-11-15 11:44:32.869218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.869308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.869334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.869349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.869361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.869392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 00:25:52.502 [2024-11-15 11:44:32.879340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.879469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.879494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.879508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.879522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.879552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 
00:25:52.502 [2024-11-15 11:44:32.889327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.889431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.889457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.889478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.889491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.889522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 00:25:52.502 [2024-11-15 11:44:32.899330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.899425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.899451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.899465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.899477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.899508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 00:25:52.502 [2024-11-15 11:44:32.909364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.909453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.909479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.909493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.909506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.909537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 
00:25:52.502 [2024-11-15 11:44:32.919381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.502 [2024-11-15 11:44:32.919472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.502 [2024-11-15 11:44:32.919499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.502 [2024-11-15 11:44:32.919513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.502 [2024-11-15 11:44:32.919526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.502 [2024-11-15 11:44:32.919557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.502 qpair failed and we were unable to recover it. 00:25:52.762 [2024-11-15 11:44:32.929444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.762 [2024-11-15 11:44:32.929535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.762 [2024-11-15 11:44:32.929562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.762 [2024-11-15 11:44:32.929577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.762 [2024-11-15 11:44:32.929591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.762 [2024-11-15 11:44:32.929628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.762 qpair failed and we were unable to recover it. 00:25:52.762 [2024-11-15 11:44:32.939429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.762 [2024-11-15 11:44:32.939518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.762 [2024-11-15 11:44:32.939544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.762 [2024-11-15 11:44:32.939560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.762 [2024-11-15 11:44:32.939574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.762 [2024-11-15 11:44:32.939603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.762 qpair failed and we were unable to recover it. 
00:25:52.762 [2024-11-15 11:44:32.949514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.762 [2024-11-15 11:44:32.949614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.762 [2024-11-15 11:44:32.949643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.762 [2024-11-15 11:44:32.949658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.762 [2024-11-15 11:44:32.949671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.762 [2024-11-15 11:44:32.949702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.762 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:32.959578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:32.959674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:32.959700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:32.959714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:32.959727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:32.959755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:32.969613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:32.969711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:32.969737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:32.969751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:32.969764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:32.969794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 
00:25:52.763 [2024-11-15 11:44:32.979569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:32.979660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:32.979686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:32.979700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:32.979714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:32.979744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:32.989636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:32.989747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:32.989773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:32.989787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:32.989800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:32.989830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:32.999607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:32.999698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:32.999723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:32.999738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:32.999751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:32.999782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 
00:25:52.763 [2024-11-15 11:44:33.009656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.009746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.009771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.009785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.009799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.009828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:33.019663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.019786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.019812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.019836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.019850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.019882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:33.029700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.029785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.029810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.029825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.029838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.029868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 
00:25:52.763 [2024-11-15 11:44:33.039702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.039793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.039819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.039833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.039846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.039875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:33.049781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.049885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.049911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.049926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.049939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.049970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:33.059801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.059881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.059906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.059921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.059933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.059969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 
00:25:52.763 [2024-11-15 11:44:33.069838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.069946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.069971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.069985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.069999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.070030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:33.079800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.763 [2024-11-15 11:44:33.079901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.763 [2024-11-15 11:44:33.079926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.763 [2024-11-15 11:44:33.079940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.763 [2024-11-15 11:44:33.079952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.763 [2024-11-15 11:44:33.079983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.763 qpair failed and we were unable to recover it. 00:25:52.763 [2024-11-15 11:44:33.089971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.090092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.090118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.090132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.090144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.090174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 
00:25:52.764 [2024-11-15 11:44:33.099956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.100037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.100063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.100077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.100090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.100121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 00:25:52.764 [2024-11-15 11:44:33.109923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.110009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.110035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.110049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.110062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.110092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 00:25:52.764 [2024-11-15 11:44:33.119954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.120046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.120075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.120091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.120104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.120135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 
00:25:52.764 [2024-11-15 11:44:33.130012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.130133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.130159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.130173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.130187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.130217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 00:25:52.764 [2024-11-15 11:44:33.140006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.140093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.140118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.140132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.140144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.140175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 00:25:52.764 [2024-11-15 11:44:33.150019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.150138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.150171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.150187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.150200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.150230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 
00:25:52.764 [2024-11-15 11:44:33.160048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.160126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.160151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.160165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.160178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.160220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 00:25:52.764 [2024-11-15 11:44:33.170090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.170178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.170204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.170218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.170231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.170261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 00:25:52.764 [2024-11-15 11:44:33.180131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.764 [2024-11-15 11:44:33.180225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.764 [2024-11-15 11:44:33.180251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.764 [2024-11-15 11:44:33.180265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.764 [2024-11-15 11:44:33.180277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:52.764 [2024-11-15 11:44:33.180313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:52.764 qpair failed and we were unable to recover it. 
00:25:53.024 [2024-11-15 11:44:33.190122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.190214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.190250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.190270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.190289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.190330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 00:25:53.024 [2024-11-15 11:44:33.200174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.200258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.200285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.200299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.200321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.200351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 00:25:53.024 [2024-11-15 11:44:33.210221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.210323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.210350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.210365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.210378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.210409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 
00:25:53.024 [2024-11-15 11:44:33.220272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.220373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.220399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.220413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.220426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.220456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 00:25:53.024 [2024-11-15 11:44:33.230270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.230366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.230395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.230411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.230424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.230455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 00:25:53.024 [2024-11-15 11:44:33.240324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.240427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.240454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.240469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.240482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.240512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 
00:25:53.024 [2024-11-15 11:44:33.250335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.024 [2024-11-15 11:44:33.250427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.024 [2024-11-15 11:44:33.250453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.024 [2024-11-15 11:44:33.250468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.024 [2024-11-15 11:44:33.250481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.024 [2024-11-15 11:44:33.250512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.024 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.260343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.260438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.260464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.260479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.260491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.260522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.270399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.270505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.270530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.270545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.270558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.270588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 
00:25:53.025 [2024-11-15 11:44:33.280415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.280497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.280528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.280543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.280556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.280586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.290463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.290555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.290580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.290594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.290607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.290638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.300449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.300527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.300552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.300566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.300580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.300609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 
00:25:53.025 [2024-11-15 11:44:33.310483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.310564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.310590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.310604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.310617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.310647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.320594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.320675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.320701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.320715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.320733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.320765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.330564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.330655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.330680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.330694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.330707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.330737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 
00:25:53.025 [2024-11-15 11:44:33.340660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.340746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.340771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.340785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.340798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.340829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.350694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.350782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.350808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.350822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.350835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.350864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.360625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.360720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.360745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.360760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.360772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.360804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 
00:25:53.025 [2024-11-15 11:44:33.370705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.370814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.370842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.370859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.370873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.370904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.380681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.025 [2024-11-15 11:44:33.380768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.025 [2024-11-15 11:44:33.380793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.025 [2024-11-15 11:44:33.380808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.025 [2024-11-15 11:44:33.380821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.025 [2024-11-15 11:44:33.380852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.025 qpair failed and we were unable to recover it. 00:25:53.025 [2024-11-15 11:44:33.390740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.026 [2024-11-15 11:44:33.390819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.026 [2024-11-15 11:44:33.390844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.026 [2024-11-15 11:44:33.390858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.026 [2024-11-15 11:44:33.390872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.026 [2024-11-15 11:44:33.390902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.026 qpair failed and we were unable to recover it. 
00:25:53.026 [2024-11-15 11:44:33.400866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.026 [2024-11-15 11:44:33.400945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.026 [2024-11-15 11:44:33.400970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.026 [2024-11-15 11:44:33.400984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.026 [2024-11-15 11:44:33.400997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.026 [2024-11-15 11:44:33.401028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.026 qpair failed and we were unable to recover it. 00:25:53.026 [2024-11-15 11:44:33.410821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.026 [2024-11-15 11:44:33.410921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.026 [2024-11-15 11:44:33.410947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.026 [2024-11-15 11:44:33.410961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.026 [2024-11-15 11:44:33.410974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.026 [2024-11-15 11:44:33.411004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.026 qpair failed and we were unable to recover it. 00:25:53.026 [2024-11-15 11:44:33.420801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.026 [2024-11-15 11:44:33.420886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.026 [2024-11-15 11:44:33.420912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.026 [2024-11-15 11:44:33.420926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.026 [2024-11-15 11:44:33.420939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.026 [2024-11-15 11:44:33.420969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.026 qpair failed and we were unable to recover it. 
00:25:53.026 [2024-11-15 11:44:33.430862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.026 [2024-11-15 11:44:33.430949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.026 [2024-11-15 11:44:33.430974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.026 [2024-11-15 11:44:33.430988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.026 [2024-11-15 11:44:33.431000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.026 [2024-11-15 11:44:33.431029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.026 qpair failed and we were unable to recover it. 00:25:53.026 [2024-11-15 11:44:33.440892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.026 [2024-11-15 11:44:33.440976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.026 [2024-11-15 11:44:33.441003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.026 [2024-11-15 11:44:33.441017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.026 [2024-11-15 11:44:33.441030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.026 [2024-11-15 11:44:33.441059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.026 qpair failed and we were unable to recover it. 00:25:53.285 [2024-11-15 11:44:33.451027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.285 [2024-11-15 11:44:33.451122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.285 [2024-11-15 11:44:33.451151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.285 [2024-11-15 11:44:33.451173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.285 [2024-11-15 11:44:33.451190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.285 [2024-11-15 11:44:33.451222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.285 qpair failed and we were unable to recover it. 
00:25:53.285 [2024-11-15 11:44:33.460957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.285 [2024-11-15 11:44:33.461070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.285 [2024-11-15 11:44:33.461097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.285 [2024-11-15 11:44:33.461112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.285 [2024-11-15 11:44:33.461124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.285 [2024-11-15 11:44:33.461155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.285 qpair failed and we were unable to recover it. 00:25:53.285 [2024-11-15 11:44:33.471015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.285 [2024-11-15 11:44:33.471098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.285 [2024-11-15 11:44:33.471124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.285 [2024-11-15 11:44:33.471139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.285 [2024-11-15 11:44:33.471152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.285 [2024-11-15 11:44:33.471183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.285 qpair failed and we were unable to recover it. 00:25:53.285 [2024-11-15 11:44:33.480978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.285 [2024-11-15 11:44:33.481063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.285 [2024-11-15 11:44:33.481089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.285 [2024-11-15 11:44:33.481103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.285 [2024-11-15 11:44:33.481116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.285 [2024-11-15 11:44:33.481158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.285 qpair failed and we were unable to recover it. 
00:25:53.285 [2024-11-15 11:44:33.491115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.285 [2024-11-15 11:44:33.491207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.285 [2024-11-15 11:44:33.491233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.285 [2024-11-15 11:44:33.491249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.491262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.491298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.501028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.501115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.501142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.501156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.501169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.501199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.511072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.511181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.511206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.511220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.511233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.511263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 
00:25:53.286 [2024-11-15 11:44:33.521112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.521187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.521214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.521228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.521241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.521271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.531180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.531295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.531328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.531343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.531355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.531387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.541155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.541248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.541274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.541288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.541307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.541338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 
00:25:53.286 [2024-11-15 11:44:33.551192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.551320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.551346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.551361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.551374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.551404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.561231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.561362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.561388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.561403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.561416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.561445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.571250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.571398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.571427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.571442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.571455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.571487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 
00:25:53.286 [2024-11-15 11:44:33.581279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.581374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.581400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.581420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.581433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.581463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.591294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.591385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.591411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.591425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.591438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.591471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 00:25:53.286 [2024-11-15 11:44:33.601580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.601677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.286 [2024-11-15 11:44:33.601704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.286 [2024-11-15 11:44:33.601718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.286 [2024-11-15 11:44:33.601732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.286 [2024-11-15 11:44:33.601762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.286 qpair failed and we were unable to recover it. 
00:25:53.286 [2024-11-15 11:44:33.611414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.286 [2024-11-15 11:44:33.611535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.611560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.611575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.611588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.611630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.287 [2024-11-15 11:44:33.621503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.621595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.621621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.621635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.621650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.621687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.287 [2024-11-15 11:44:33.631476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.631563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.631589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.631603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.631617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.631648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 
00:25:53.287 [2024-11-15 11:44:33.641449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.641535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.641561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.641575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.641588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.641619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.287 [2024-11-15 11:44:33.651481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.651572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.651597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.651611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.651625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.651657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.287 [2024-11-15 11:44:33.661502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.661595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.661620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.661635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.661648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.661677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 
00:25:53.287 [2024-11-15 11:44:33.671564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.671695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.671720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.671734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.671747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.671776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.287 [2024-11-15 11:44:33.681625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.681704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.681730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.681744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.681757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.681787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.287 [2024-11-15 11:44:33.691623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.691719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.691744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.691758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.691772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.691801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 
00:25:53.287 [2024-11-15 11:44:33.701599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.287 [2024-11-15 11:44:33.701685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.287 [2024-11-15 11:44:33.701711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.287 [2024-11-15 11:44:33.701725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.287 [2024-11-15 11:44:33.701738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.287 [2024-11-15 11:44:33.701768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.287 qpair failed and we were unable to recover it. 00:25:53.546 [2024-11-15 11:44:33.711636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.546 [2024-11-15 11:44:33.711719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.546 [2024-11-15 11:44:33.711752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.546 [2024-11-15 11:44:33.711768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.546 [2024-11-15 11:44:33.711782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.711813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.721669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.721757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.721784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.721799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.721812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.721842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 
00:25:53.547 [2024-11-15 11:44:33.731742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.731832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.731858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.731872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.731885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.731916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.741711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.741794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.741820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.741834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.741847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.741891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.751743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.751829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.751854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.751868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.751890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.751922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 
00:25:53.547 [2024-11-15 11:44:33.761775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.761898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.761924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.761939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.761951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.761983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.771838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.771926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.771951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.771965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.771978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.772007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.781857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.781948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.781974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.781988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.782001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.782031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 
00:25:53.547 [2024-11-15 11:44:33.791916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.792023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.792051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.792066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.792078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.792109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.801876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.801952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.801978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.801992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.802005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.802047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.811935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.812021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.812047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.812061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.812074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.812105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 
00:25:53.547 [2024-11-15 11:44:33.822050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.822185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.822214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.822229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.822242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.822272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.547 [2024-11-15 11:44:33.832012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.547 [2024-11-15 11:44:33.832100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.547 [2024-11-15 11:44:33.832125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.547 [2024-11-15 11:44:33.832139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.547 [2024-11-15 11:44:33.832152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.547 [2024-11-15 11:44:33.832182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.547 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.841991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.842070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.842101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.842116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.842129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.842160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 
00:25:53.548 [2024-11-15 11:44:33.852033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.852157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.852182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.852196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.852209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.852240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.862103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.862207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.862234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.862248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.862260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.862291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.872077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.872161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.872186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.872201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.872214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.872243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 
00:25:53.548 [2024-11-15 11:44:33.882119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.882215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.882241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.882254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.882273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.882313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.892147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.892237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.892265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.892280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.892293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.892334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.902172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.902256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.902282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.902296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.902318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.902349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 
00:25:53.548 [2024-11-15 11:44:33.912204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.912287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.912320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.912336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.912349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.912379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.922260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.922354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.922380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.922395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.922407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.922437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.932270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.932371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.932397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.932411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.932424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.932454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 
00:25:53.548 [2024-11-15 11:44:33.942278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.548 [2024-11-15 11:44:33.942373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.548 [2024-11-15 11:44:33.942398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.548 [2024-11-15 11:44:33.942412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.548 [2024-11-15 11:44:33.942425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.548 [2024-11-15 11:44:33.942455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.548 qpair failed and we were unable to recover it. 00:25:53.548 [2024-11-15 11:44:33.952318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.549 [2024-11-15 11:44:33.952407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.549 [2024-11-15 11:44:33.952432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.549 [2024-11-15 11:44:33.952446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.549 [2024-11-15 11:44:33.952459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.549 [2024-11-15 11:44:33.952489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.549 qpair failed and we were unable to recover it. 00:25:53.549 [2024-11-15 11:44:33.962341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.549 [2024-11-15 11:44:33.962426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.549 [2024-11-15 11:44:33.962452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.549 [2024-11-15 11:44:33.962466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.549 [2024-11-15 11:44:33.962479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.549 [2024-11-15 11:44:33.962508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.549 qpair failed and we were unable to recover it. 
00:25:53.808 [2024-11-15 11:44:33.972391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:33.972498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:33.972527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:33.972541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:33.972555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:33.972586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 00:25:53.808 [2024-11-15 11:44:33.982426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:33.982550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:33.982578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:33.982592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:33.982605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:33.982637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 00:25:53.808 [2024-11-15 11:44:33.992414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:33.992503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:33.992529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:33.992543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:33.992556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:33.992588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 
00:25:53.808 [2024-11-15 11:44:34.002489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:34.002579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:34.002605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:34.002619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:34.002632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:34.002662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 00:25:53.808 [2024-11-15 11:44:34.012517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:34.012610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:34.012636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:34.012657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:34.012671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:34.012701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 00:25:53.808 [2024-11-15 11:44:34.022508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:34.022594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:34.022619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:34.022633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:34.022647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:34.022676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 
00:25:53.808 [2024-11-15 11:44:34.032549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:34.032634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:34.032659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:34.032673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:34.032686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:34.032717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 00:25:53.808 [2024-11-15 11:44:34.042694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:34.042782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:34.042808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:34.042822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:34.042835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:34.042865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.808 qpair failed and we were unable to recover it. 00:25:53.808 [2024-11-15 11:44:34.052714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.808 [2024-11-15 11:44:34.052804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.808 [2024-11-15 11:44:34.052829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.808 [2024-11-15 11:44:34.052843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.808 [2024-11-15 11:44:34.052856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.808 [2024-11-15 11:44:34.052900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 
00:25:53.809 [2024-11-15 11:44:34.062872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.063001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.063027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.063041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.063056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.063086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.072667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.072751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.072776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.072791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.072804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.072835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.082790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.082876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.082901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.082916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.082929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.082959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 
00:25:53.809 [2024-11-15 11:44:34.092771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.092887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.092913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.092927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.092940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.092970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.102809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.102940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.102969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.102986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.102999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.103030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.112783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.112906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.112932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.112946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.112959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.112990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 
00:25:53.809 [2024-11-15 11:44:34.122821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.122916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.122942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.122957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.122970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.123001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.132867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.132955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.132982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.132996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.133008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.133050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.142923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.143043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.143075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.143090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.143103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.143144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 
00:25:53.809 [2024-11-15 11:44:34.152911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.152990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.153016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.153031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.153044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.153075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.162996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.163081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.163107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.163121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.163134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.163164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.809 [2024-11-15 11:44:34.173002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.173091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.173117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.173131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.173144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.173174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 
00:25:53.809 [2024-11-15 11:44:34.183112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.809 [2024-11-15 11:44:34.183212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.809 [2024-11-15 11:44:34.183238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.809 [2024-11-15 11:44:34.183252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.809 [2024-11-15 11:44:34.183265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.809 [2024-11-15 11:44:34.183301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.809 qpair failed and we were unable to recover it. 00:25:53.810 [2024-11-15 11:44:34.193027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.810 [2024-11-15 11:44:34.193106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.810 [2024-11-15 11:44:34.193131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.810 [2024-11-15 11:44:34.193145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.810 [2024-11-15 11:44:34.193158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.810 [2024-11-15 11:44:34.193187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.810 qpair failed and we were unable to recover it. 00:25:53.810 [2024-11-15 11:44:34.203057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.810 [2024-11-15 11:44:34.203138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.810 [2024-11-15 11:44:34.203164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.810 [2024-11-15 11:44:34.203178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.810 [2024-11-15 11:44:34.203191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.810 [2024-11-15 11:44:34.203221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.810 qpair failed and we were unable to recover it. 
00:25:53.810 [2024-11-15 11:44:34.213129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.810 [2024-11-15 11:44:34.213216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.810 [2024-11-15 11:44:34.213241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.810 [2024-11-15 11:44:34.213256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.810 [2024-11-15 11:44:34.213269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.810 [2024-11-15 11:44:34.213298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.810 qpair failed and we were unable to recover it. 00:25:53.810 [2024-11-15 11:44:34.223170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.810 [2024-11-15 11:44:34.223255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.810 [2024-11-15 11:44:34.223281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.810 [2024-11-15 11:44:34.223295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.810 [2024-11-15 11:44:34.223313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:53.810 [2024-11-15 11:44:34.223344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:53.810 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-15 11:44:34.233276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.233371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.233399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.233414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.233428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.233461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 
00:25:54.069 [2024-11-15 11:44:34.243225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.243327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.243354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.243369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.243382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.243414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-15 11:44:34.253235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.253335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.253361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.253376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.253389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.253419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-15 11:44:34.263239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.263331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.263357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.263371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.263384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.263415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 
00:25:54.069 [2024-11-15 11:44:34.273353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.273486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.273517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.273532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.273545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.273577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-15 11:44:34.283317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.283401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.283426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.283440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.283453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.283484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-15 11:44:34.293408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.293543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.293569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.293583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.293596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.293627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 
00:25:54.069 [2024-11-15 11:44:34.303375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.303464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.303490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.303504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.303517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.303549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.069 qpair failed and we were unable to recover it. 00:25:54.069 [2024-11-15 11:44:34.313410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.069 [2024-11-15 11:44:34.313494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.069 [2024-11-15 11:44:34.313520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.069 [2024-11-15 11:44:34.313534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.069 [2024-11-15 11:44:34.313553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.069 [2024-11-15 11:44:34.313583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.323428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.323546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.323572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.323587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.323600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.323632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 
00:25:54.070 [2024-11-15 11:44:34.333497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.333621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.333646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.333660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.333673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.333702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.343521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.343606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.343635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.343650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.343663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.343692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.353541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.353628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.353653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.353667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.353680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.353710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 
00:25:54.070 [2024-11-15 11:44:34.363623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.363704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.363730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.363744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.363757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.363787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.373588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.373681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.373707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.373721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.373734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.373765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.383623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.383709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.383735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.383749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.383762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.383792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 
00:25:54.070 [2024-11-15 11:44:34.393655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.393788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.393813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.393827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.393840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.393870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.403671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.403804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.403835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.403850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.403864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.403894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.413691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.413779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.413805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.413819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.413832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.413862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 
00:25:54.070 [2024-11-15 11:44:34.423802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.423885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.423910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.423925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.423938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.423969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.433774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.433852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.433876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.433890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.433902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.433931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 00:25:54.070 [2024-11-15 11:44:34.443823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.070 [2024-11-15 11:44:34.443925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.070 [2024-11-15 11:44:34.443951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.070 [2024-11-15 11:44:34.443971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.070 [2024-11-15 11:44:34.443985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.070 [2024-11-15 11:44:34.444015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.070 qpair failed and we were unable to recover it. 
00:25:54.071 [2024-11-15 11:44:34.453831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.071 [2024-11-15 11:44:34.453920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.071 [2024-11-15 11:44:34.453945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.071 [2024-11-15 11:44:34.453960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.071 [2024-11-15 11:44:34.453973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.071 [2024-11-15 11:44:34.454001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.071 qpair failed and we were unable to recover it. 00:25:54.071 [2024-11-15 11:44:34.463825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.071 [2024-11-15 11:44:34.463912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.071 [2024-11-15 11:44:34.463938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.071 [2024-11-15 11:44:34.463951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.071 [2024-11-15 11:44:34.463965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.071 [2024-11-15 11:44:34.463995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.071 qpair failed and we were unable to recover it. 00:25:54.071 [2024-11-15 11:44:34.473891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.071 [2024-11-15 11:44:34.474003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.071 [2024-11-15 11:44:34.474028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.071 [2024-11-15 11:44:34.474042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.071 [2024-11-15 11:44:34.474055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.071 [2024-11-15 11:44:34.474086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.071 qpair failed and we were unable to recover it. 
00:25:54.071 [2024-11-15 11:44:34.483880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.071 [2024-11-15 11:44:34.483969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.071 [2024-11-15 11:44:34.483994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.071 [2024-11-15 11:44:34.484008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.071 [2024-11-15 11:44:34.484022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.071 [2024-11-15 11:44:34.484052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.071 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.493952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.494044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.494072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.494086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.494099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.494130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.503938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.504050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.504077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.504091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.504104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.504136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-11-15 11:44:34.514063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.514150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.514177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.514191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.514204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.514236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.524118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.524255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.524281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.524295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.524317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.524349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.534034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.534134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.534160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.534174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.534187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.534218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-11-15 11:44:34.544088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.544181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.544207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.544221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.544234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.544265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.554114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.554235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.554264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.554280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.554293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.554337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.564128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.564228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.564256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.564271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.564284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.564337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-11-15 11:44:34.574182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.574272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.574298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.574328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.574343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.574373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.584175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.584262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.584287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.584309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.584325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.584368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 00:25:54.331 [2024-11-15 11:44:34.594238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.331 [2024-11-15 11:44:34.594358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.331 [2024-11-15 11:44:34.594384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.331 [2024-11-15 11:44:34.594398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.331 [2024-11-15 11:44:34.594411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.331 [2024-11-15 11:44:34.594453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.331 qpair failed and we were unable to recover it. 
00:25:54.331 [2024-11-15 11:44:34.604226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.604315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.604341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.604355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.604368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.604397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.614287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.614433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.614458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.614472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.614485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.614521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.624315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.624396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.624421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.624435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.624448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.624478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-11-15 11:44:34.634365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.634452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.634477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.634492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.634505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.634534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.644371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.644494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.644519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.644533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.644546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.644575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.654393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.654502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.654527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.654542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.654555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.654584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-11-15 11:44:34.664422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.664532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.664557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.664572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.664586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.664616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.674493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.674576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.674601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.674615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.674628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.674657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.684454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.684536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.684561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.684575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.684588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.684620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-11-15 11:44:34.694495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.694589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.694614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.694628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.694641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.694670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.704553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.704674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.704705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.704720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.704733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.704763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.714540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.714627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.714652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.714666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.714679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.714710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 
00:25:54.332 [2024-11-15 11:44:34.724588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.724717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.332 [2024-11-15 11:44:34.724742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.332 [2024-11-15 11:44:34.724756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.332 [2024-11-15 11:44:34.724770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.332 [2024-11-15 11:44:34.724799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.332 qpair failed and we were unable to recover it. 00:25:54.332 [2024-11-15 11:44:34.734605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.332 [2024-11-15 11:44:34.734696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.333 [2024-11-15 11:44:34.734720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.333 [2024-11-15 11:44:34.734734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.333 [2024-11-15 11:44:34.734747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.333 [2024-11-15 11:44:34.734778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.333 qpair failed and we were unable to recover it. 00:25:54.333 [2024-11-15 11:44:34.744636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.333 [2024-11-15 11:44:34.744721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.333 [2024-11-15 11:44:34.744746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.333 [2024-11-15 11:44:34.744761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.333 [2024-11-15 11:44:34.744774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.333 [2024-11-15 11:44:34.744810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.333 qpair failed and we were unable to recover it. 
00:25:54.592 [2024-11-15 11:44:34.754668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.754756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.754784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.754799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.754812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.754842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 00:25:54.592 [2024-11-15 11:44:34.764681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.764768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.764795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.764809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.764822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.764855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 00:25:54.592 [2024-11-15 11:44:34.774774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.774867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.774893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.774907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.774921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.774951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 
00:25:54.592 [2024-11-15 11:44:34.784743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.784860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.784886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.784901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.784914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.784946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 00:25:54.592 [2024-11-15 11:44:34.794839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.794926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.794952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.794966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.794979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.795010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 00:25:54.592 [2024-11-15 11:44:34.804804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.804885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.804911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.804925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.804938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.804969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 
00:25:54.592 [2024-11-15 11:44:34.814823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.814909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.814935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.814949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.592 [2024-11-15 11:44:34.814963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.592 [2024-11-15 11:44:34.814992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.592 qpair failed and we were unable to recover it. 00:25:54.592 [2024-11-15 11:44:34.824868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.592 [2024-11-15 11:44:34.824956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.592 [2024-11-15 11:44:34.824981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.592 [2024-11-15 11:44:34.824995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.825008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.825038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.834854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.834990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.835020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.835035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.835048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.835078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 
00:25:54.593 [2024-11-15 11:44:34.844891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.844969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.844995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.845009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.845022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.845053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.854937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.855060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.855086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.855100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.855114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.855144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.864967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.865054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.865081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.865096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.865111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.865143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 
00:25:54.593 [2024-11-15 11:44:34.874994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.875108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.875134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.875148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.875167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.875198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.885034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.885130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.885156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.885170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.885183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.885214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.895089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.895202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.895227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.895242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.895255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.895286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 
00:25:54.593 [2024-11-15 11:44:34.905118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.905203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.905228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.905243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.905258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.905289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.915104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.915188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.915214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.915229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.915242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.915273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.925138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.925221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.925246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.925260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.925273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.925310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 
00:25:54.593 [2024-11-15 11:44:34.935196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.935317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.935344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.935359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.935372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.935405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.945235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.945350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.945377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.945391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.945406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.593 [2024-11-15 11:44:34.945449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.593 qpair failed and we were unable to recover it. 00:25:54.593 [2024-11-15 11:44:34.955326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.593 [2024-11-15 11:44:34.955419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.593 [2024-11-15 11:44:34.955444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.593 [2024-11-15 11:44:34.955459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.593 [2024-11-15 11:44:34.955472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.594 [2024-11-15 11:44:34.955502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.594 qpair failed and we were unable to recover it. 
00:25:54.594 [2024-11-15 11:44:34.965355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.594 [2024-11-15 11:44:34.965469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.594 [2024-11-15 11:44:34.965500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.594 [2024-11-15 11:44:34.965515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.594 [2024-11-15 11:44:34.965528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.594 [2024-11-15 11:44:34.965559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.594 qpair failed and we were unable to recover it. 00:25:54.594 [2024-11-15 11:44:34.975333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.594 [2024-11-15 11:44:34.975438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.594 [2024-11-15 11:44:34.975466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.594 [2024-11-15 11:44:34.975485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.594 [2024-11-15 11:44:34.975499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.594 [2024-11-15 11:44:34.975541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.594 qpair failed and we were unable to recover it. 00:25:54.594 [2024-11-15 11:44:34.985297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.594 [2024-11-15 11:44:34.985394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.594 [2024-11-15 11:44:34.985420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.594 [2024-11-15 11:44:34.985435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.594 [2024-11-15 11:44:34.985448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.594 [2024-11-15 11:44:34.985477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.594 qpair failed and we were unable to recover it. 
00:25:54.594 [2024-11-15 11:44:34.995367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.594 [2024-11-15 11:44:34.995454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.594 [2024-11-15 11:44:34.995482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.594 [2024-11-15 11:44:34.995497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.594 [2024-11-15 11:44:34.995511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.594 [2024-11-15 11:44:34.995542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.594 qpair failed and we were unable to recover it. 00:25:54.594 [2024-11-15 11:44:35.005426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.594 [2024-11-15 11:44:35.005520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.594 [2024-11-15 11:44:35.005546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.594 [2024-11-15 11:44:35.005568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.594 [2024-11-15 11:44:35.005582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.594 [2024-11-15 11:44:35.005613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.594 qpair failed and we were unable to recover it. 00:25:54.594 [2024-11-15 11:44:35.015431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.853 [2024-11-15 11:44:35.015525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.853 [2024-11-15 11:44:35.015553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.853 [2024-11-15 11:44:35.015575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.853 [2024-11-15 11:44:35.015599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.853 [2024-11-15 11:44:35.015641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.853 qpair failed and we were unable to recover it. 
00:25:54.853 [2024-11-15 11:44:35.025425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.853 [2024-11-15 11:44:35.025512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.853 [2024-11-15 11:44:35.025539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.853 [2024-11-15 11:44:35.025554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.853 [2024-11-15 11:44:35.025567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.853 [2024-11-15 11:44:35.025597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.853 qpair failed and we were unable to recover it. 00:25:54.853 [2024-11-15 11:44:35.035461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.853 [2024-11-15 11:44:35.035582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.853 [2024-11-15 11:44:35.035608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.853 [2024-11-15 11:44:35.035623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.853 [2024-11-15 11:44:35.035637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.853 [2024-11-15 11:44:35.035668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.853 qpair failed and we were unable to recover it. 00:25:54.853 [2024-11-15 11:44:35.045514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.853 [2024-11-15 11:44:35.045619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.853 [2024-11-15 11:44:35.045645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.853 [2024-11-15 11:44:35.045659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.853 [2024-11-15 11:44:35.045673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.853 [2024-11-15 11:44:35.045702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.853 qpair failed and we were unable to recover it. 
00:25:54.853 [2024-11-15 11:44:35.055560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.055651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.055676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.055690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.055703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.055734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.065555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.065643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.065668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.065682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.065695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.065725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.075638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.075733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.075759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.075773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.075787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.075816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 
00:25:54.854 [2024-11-15 11:44:35.085625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.085707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.085733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.085747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.085760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.085792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.095642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.095767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.095792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.095807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.095820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.095850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.105684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.105763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.105788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.105803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.105816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.105847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 
00:25:54.854 [2024-11-15 11:44:35.115662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.115743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.115769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.115783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.115797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.115827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.125708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.125792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.125817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.125832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.125845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.125874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.135744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.135834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.135860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.135881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.135894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.135925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 
00:25:54.854 [2024-11-15 11:44:35.145730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.145821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.145846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.145860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.145873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.145904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.155829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.155946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.155970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.155984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.155999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.156029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.165777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.165861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.165886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.165900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.165913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.165943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 
00:25:54.854 [2024-11-15 11:44:35.175841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.175931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.854 [2024-11-15 11:44:35.175956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.854 [2024-11-15 11:44:35.175970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.854 [2024-11-15 11:44:35.175983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.854 [2024-11-15 11:44:35.176019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.854 qpair failed and we were unable to recover it. 00:25:54.854 [2024-11-15 11:44:35.185904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.854 [2024-11-15 11:44:35.185992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.186018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.186032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.186045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.186076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:54.855 [2024-11-15 11:44:35.195885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.195994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.196020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.196034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.196047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.196076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 
00:25:54.855 [2024-11-15 11:44:35.205914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.205993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.206018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.206032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.206045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.206087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:54.855 [2024-11-15 11:44:35.215990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.216099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.216125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.216140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.216153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.216184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:54.855 [2024-11-15 11:44:35.225992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.226121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.226147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.226161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.226174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.226205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 
00:25:54.855 [2024-11-15 11:44:35.236009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.236132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.236157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.236171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.236186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.236217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:54.855 [2024-11-15 11:44:35.246047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.246130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.246155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.246169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.246182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.246214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:54.855 [2024-11-15 11:44:35.256094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.256196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.256222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.256236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.256249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.256280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 
00:25:54.855 [2024-11-15 11:44:35.266168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.266267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.266300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.266324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.266337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.266367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:54.855 [2024-11-15 11:44:35.276105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.855 [2024-11-15 11:44:35.276201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.855 [2024-11-15 11:44:35.276236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.855 [2024-11-15 11:44:35.276258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.855 [2024-11-15 11:44:35.276271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:54.855 [2024-11-15 11:44:35.276318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:54.855 qpair failed and we were unable to recover it. 00:25:55.114 [2024-11-15 11:44:35.286145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.114 [2024-11-15 11:44:35.286239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.114 [2024-11-15 11:44:35.286267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.114 [2024-11-15 11:44:35.286282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.114 [2024-11-15 11:44:35.286295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.114 [2024-11-15 11:44:35.286351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.114 qpair failed and we were unable to recover it. 
00:25:55.114 [2024-11-15 11:44:35.296274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.296379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.296405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.296420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.296433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.296463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.306217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.306310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.306336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.306350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.306368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.306401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.316218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.316298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.316334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.316349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.316362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.316395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 
00:25:55.115 [2024-11-15 11:44:35.326249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.326343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.326369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.326384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.326396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.326431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.336278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.336380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.336407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.336421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.336434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.336464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.346311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.346408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.346434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.346448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.346463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.346493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 
00:25:55.115 [2024-11-15 11:44:35.356329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.356416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.356441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.356456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.356468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.356500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.366374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.366502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.366527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.366542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.366555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.366585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.376410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.376499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.376524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.376538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.376551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.376581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 
00:25:55.115 [2024-11-15 11:44:35.386444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.386571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.386596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.386610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.386623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.386654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.396461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.396542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.396575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.115 [2024-11-15 11:44:35.396591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.115 [2024-11-15 11:44:35.396604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.115 [2024-11-15 11:44:35.396635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.115 qpair failed and we were unable to recover it. 00:25:55.115 [2024-11-15 11:44:35.406474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.115 [2024-11-15 11:44:35.406555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.115 [2024-11-15 11:44:35.406580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.406594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.406607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.406636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 
00:25:55.116 [2024-11-15 11:44:35.416518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.416605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.416630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.416645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.416657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.416688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.426564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.426652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.426677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.426691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.426704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.426733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.436658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.436739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.436763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.436776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.436794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.436823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 
00:25:55.116 [2024-11-15 11:44:35.446623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.446706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.446731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.446745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.446758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.446788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.456660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.456746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.456772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.456785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.456798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.456828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.466673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.466753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.466778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.466792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.466805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.466835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 
00:25:55.116 [2024-11-15 11:44:35.476748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.476835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.476862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.476880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.476894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.476924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.486713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.486803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.486828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.486842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.486856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.486885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.496798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.496890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.496918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.496933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.496947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.496977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 
00:25:55.116 [2024-11-15 11:44:35.506849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.506931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.506957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.506971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.506984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.507013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.516844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.116 [2024-11-15 11:44:35.516924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.116 [2024-11-15 11:44:35.516950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.116 [2024-11-15 11:44:35.516964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.116 [2024-11-15 11:44:35.516977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.116 [2024-11-15 11:44:35.517007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.116 qpair failed and we were unable to recover it. 00:25:55.116 [2024-11-15 11:44:35.526862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.117 [2024-11-15 11:44:35.526942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.117 [2024-11-15 11:44:35.526973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.117 [2024-11-15 11:44:35.526988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.117 [2024-11-15 11:44:35.527001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.117 [2024-11-15 11:44:35.527031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.117 qpair failed and we were unable to recover it. 
00:25:55.117 [2024-11-15 11:44:35.536905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.117 [2024-11-15 11:44:35.536994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.117 [2024-11-15 11:44:35.537021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.117 [2024-11-15 11:44:35.537035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.117 [2024-11-15 11:44:35.537049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.117 [2024-11-15 11:44:35.537080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.117 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.546934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.547024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.547051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.547065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.547078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.547109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.556972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.557067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.557093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.557108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.557121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.557151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 
00:25:55.376 [2024-11-15 11:44:35.566960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.567090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.567116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.567136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.567149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.567180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.577014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.577146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.577171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.577185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.577198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.577229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.587103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.587188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.587213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.587227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.587240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.587270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 
00:25:55.376 [2024-11-15 11:44:35.597048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.597131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.597160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.597175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.597188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.597217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.607078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.607158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.607186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.607202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.607216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.607247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.617098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.617195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.617220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.617235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.617258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.617288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 
00:25:55.376 [2024-11-15 11:44:35.627136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.627222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.627248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.627262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.627275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.627312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.637183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.376 [2024-11-15 11:44:35.637267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.376 [2024-11-15 11:44:35.637292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.376 [2024-11-15 11:44:35.637443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.376 [2024-11-15 11:44:35.637468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.376 [2024-11-15 11:44:35.637524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.376 qpair failed and we were unable to recover it. 00:25:55.376 [2024-11-15 11:44:35.647208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.647293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.647328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.647344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.647357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.647387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 
00:25:55.377 [2024-11-15 11:44:35.657235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.657338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.657367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.657382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.657395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.657426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.667240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.667334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.667361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.667375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.667388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.667420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.677274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.677364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.677390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.677405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.677417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.677446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 
00:25:55.377 [2024-11-15 11:44:35.687284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.687375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.687400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.687415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.687427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.687459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.697366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.697458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.697483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.697504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.697518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.697548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.707356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.707482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.707508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.707522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.707536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.707567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 
00:25:55.377 [2024-11-15 11:44:35.717377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.717463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.717489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.717503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.717516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.717547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.727427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.727552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.727577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.727591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.727604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.727634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.737477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.737584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.737614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.737629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.737642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.737679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 
00:25:55.377 [2024-11-15 11:44:35.747471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.747554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.747580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.747594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.747607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.747637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.757514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.757605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.757630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.757644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.757657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.757688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 00:25:55.377 [2024-11-15 11:44:35.767546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.377 [2024-11-15 11:44:35.767625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.377 [2024-11-15 11:44:35.767650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.377 [2024-11-15 11:44:35.767665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.377 [2024-11-15 11:44:35.767678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.377 [2024-11-15 11:44:35.767707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.377 qpair failed and we were unable to recover it. 
00:25:55.377 [2024-11-15 11:44:35.777605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.378 [2024-11-15 11:44:35.777696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.378 [2024-11-15 11:44:35.777722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.378 [2024-11-15 11:44:35.777737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.378 [2024-11-15 11:44:35.777753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.378 [2024-11-15 11:44:35.777785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-15 11:44:35.787641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.378 [2024-11-15 11:44:35.787724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.378 [2024-11-15 11:44:35.787749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.378 [2024-11-15 11:44:35.787763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.378 [2024-11-15 11:44:35.787777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.378 [2024-11-15 11:44:35.787806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.378 qpair failed and we were unable to recover it. 00:25:55.378 [2024-11-15 11:44:35.797709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.378 [2024-11-15 11:44:35.797813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.378 [2024-11-15 11:44:35.797840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.378 [2024-11-15 11:44:35.797854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.378 [2024-11-15 11:44:35.797868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.378 [2024-11-15 11:44:35.797898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.378 qpair failed and we were unable to recover it. 
00:25:55.637 [2024-11-15 11:44:35.807646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.807732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.807759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.807774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.807787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.807819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 00:25:55.637 [2024-11-15 11:44:35.817674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.817781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.817806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.817820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.817833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.817863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 00:25:55.637 [2024-11-15 11:44:35.827744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.827842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.827874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.827889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.827903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.827933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 
00:25:55.637 [2024-11-15 11:44:35.837727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.837807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.837833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.837847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.837860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.837890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 00:25:55.637 [2024-11-15 11:44:35.847755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.847868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.847894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.847908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.847921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.847950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 00:25:55.637 [2024-11-15 11:44:35.857867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.857960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.857987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.858001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.858014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.858045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 
00:25:55.637 [2024-11-15 11:44:35.867838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.867953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.867978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.867993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.868011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.868045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 00:25:55.637 [2024-11-15 11:44:35.877886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.877980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.878005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.878020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.878033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.637 [2024-11-15 11:44:35.878063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.637 qpair failed and we were unable to recover it. 00:25:55.637 [2024-11-15 11:44:35.887884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.637 [2024-11-15 11:44:35.887967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.637 [2024-11-15 11:44:35.887992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.637 [2024-11-15 11:44:35.888007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.637 [2024-11-15 11:44:35.888020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.888049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 
00:25:55.638 [2024-11-15 11:44:35.898030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.898129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.898154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.898168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.898182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.898211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.907955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.908043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.908069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.908083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.908096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.908128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.918087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.918210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.918239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.918256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.918269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.918300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 
00:25:55.638 [2024-11-15 11:44:35.927981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.928067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.928093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.928108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.928121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.928151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.938036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.938125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.938150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.938165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.938178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.938208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.948142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.948228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.948256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.948273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.948287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.948327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 
00:25:55.638 [2024-11-15 11:44:35.958086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.958209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.958242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.958257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.958270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.958300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.968125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.968213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.968239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.968253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.968266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.968296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.978140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.978230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.978255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.978269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.978282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.978333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 
00:25:55.638 [2024-11-15 11:44:35.988182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.988263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.988290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.988311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.988325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.988355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:35.998270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:35.998374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:35.998400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:35.998414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:35.998433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:35.998464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.638 [2024-11-15 11:44:36.008245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:36.008372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:36.008398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:36.008412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:36.008426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:36.008457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 
00:25:55.638 [2024-11-15 11:44:36.018270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.638 [2024-11-15 11:44:36.018369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.638 [2024-11-15 11:44:36.018395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.638 [2024-11-15 11:44:36.018409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.638 [2024-11-15 11:44:36.018423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.638 [2024-11-15 11:44:36.018453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.638 qpair failed and we were unable to recover it. 00:25:55.639 [2024-11-15 11:44:36.028278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.639 [2024-11-15 11:44:36.028382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.639 [2024-11-15 11:44:36.028408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.639 [2024-11-15 11:44:36.028423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.639 [2024-11-15 11:44:36.028436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.639 [2024-11-15 11:44:36.028467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.639 qpair failed and we were unable to recover it. 00:25:55.639 [2024-11-15 11:44:36.038300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.639 [2024-11-15 11:44:36.038394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.639 [2024-11-15 11:44:36.038420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.639 [2024-11-15 11:44:36.038435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.639 [2024-11-15 11:44:36.038448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.639 [2024-11-15 11:44:36.038479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.639 qpair failed and we were unable to recover it. 
00:25:55.639 [2024-11-15 11:44:36.048328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.639 [2024-11-15 11:44:36.048445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.639 [2024-11-15 11:44:36.048474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.639 [2024-11-15 11:44:36.048489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.639 [2024-11-15 11:44:36.048503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.639 [2024-11-15 11:44:36.048533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.639 qpair failed and we were unable to recover it. 00:25:55.639 [2024-11-15 11:44:36.058371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.639 [2024-11-15 11:44:36.058468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.639 [2024-11-15 11:44:36.058502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.639 [2024-11-15 11:44:36.058525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.639 [2024-11-15 11:44:36.058539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.639 [2024-11-15 11:44:36.058571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.639 qpair failed and we were unable to recover it. 00:25:55.900 [2024-11-15 11:44:36.068408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.900 [2024-11-15 11:44:36.068495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.900 [2024-11-15 11:44:36.068527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.900 [2024-11-15 11:44:36.068551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.900 [2024-11-15 11:44:36.068568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.900 [2024-11-15 11:44:36.068609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.900 qpair failed and we were unable to recover it. 
00:25:55.900 [2024-11-15 11:44:36.078408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.900 [2024-11-15 11:44:36.078496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.900 [2024-11-15 11:44:36.078525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.900 [2024-11-15 11:44:36.078540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.900 [2024-11-15 11:44:36.078553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.900 [2024-11-15 11:44:36.078583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.900 qpair failed and we were unable to recover it. 00:25:55.900 [2024-11-15 11:44:36.088438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.900 [2024-11-15 11:44:36.088533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.900 [2024-11-15 11:44:36.088566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.900 [2024-11-15 11:44:36.088581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.900 [2024-11-15 11:44:36.088594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.900 [2024-11-15 11:44:36.088625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.900 qpair failed and we were unable to recover it. 00:25:55.900 [2024-11-15 11:44:36.098529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.900 [2024-11-15 11:44:36.098634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.900 [2024-11-15 11:44:36.098660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.900 [2024-11-15 11:44:36.098675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.900 [2024-11-15 11:44:36.098688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.900 [2024-11-15 11:44:36.098718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.900 qpair failed and we were unable to recover it. 
00:25:55.900 [2024-11-15 11:44:36.108525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.900 [2024-11-15 11:44:36.108633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.900 [2024-11-15 11:44:36.108658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.900 [2024-11-15 11:44:36.108673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.900 [2024-11-15 11:44:36.108685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.900 [2024-11-15 11:44:36.108715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.900 qpair failed and we were unable to recover it. 00:25:55.900 [2024-11-15 11:44:36.118540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.900 [2024-11-15 11:44:36.118626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.900 [2024-11-15 11:44:36.118652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.900 [2024-11-15 11:44:36.118666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.118679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.118709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.128539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.128618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.128643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.128663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.128677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.128708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 
00:25:55.901 [2024-11-15 11:44:36.138607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.138741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.138766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.138780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.138793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.138823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.148748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.148884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.148911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.148925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.148938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.148967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.158654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.158738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.158764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.158778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.158791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.158822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 
00:25:55.901 [2024-11-15 11:44:36.168661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.168750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.168776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.168789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.168802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.168832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.178708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.178827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.178853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.178868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.178881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.178914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.188743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.188824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.188853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.188869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.188883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.188915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 
00:25:55.901 [2024-11-15 11:44:36.198787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.198889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.198916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.198930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.198943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.198973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.208804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.208936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.208964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.208978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.208992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.209021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.218813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.218906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.218932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.218946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.218959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.218990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 
00:25:55.901 [2024-11-15 11:44:36.228871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.228952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.228978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.228992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.229005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.229035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.238850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.238936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.238961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.238975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.901 [2024-11-15 11:44:36.238988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.901 [2024-11-15 11:44:36.239018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.901 qpair failed and we were unable to recover it. 00:25:55.901 [2024-11-15 11:44:36.248921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.901 [2024-11-15 11:44:36.249038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.901 [2024-11-15 11:44:36.249066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.901 [2024-11-15 11:44:36.249081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.249094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.249124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 
00:25:55.902 [2024-11-15 11:44:36.258931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.259019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.259045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.259065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.259079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.259110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 00:25:55.902 [2024-11-15 11:44:36.269032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.269129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.269154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.269168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.269181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.269210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 00:25:55.902 [2024-11-15 11:44:36.279020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.279123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.279148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.279163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.279176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.279205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 
00:25:55.902 [2024-11-15 11:44:36.289010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.289138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.289164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.289178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.289191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.289220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 00:25:55.902 [2024-11-15 11:44:36.299091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.299230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.299255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.299268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.299281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.299324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 00:25:55.902 [2024-11-15 11:44:36.309052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.309135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.309160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.309174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.309188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.309217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 
00:25:55.902 [2024-11-15 11:44:36.319104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.902 [2024-11-15 11:44:36.319233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.902 [2024-11-15 11:44:36.319260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.902 [2024-11-15 11:44:36.319274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.902 [2024-11-15 11:44:36.319287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:55.902 [2024-11-15 11:44:36.319331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:55.902 qpair failed and we were unable to recover it. 00:25:56.161 [2024-11-15 11:44:36.329145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.161 [2024-11-15 11:44:36.329268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.161 [2024-11-15 11:44:36.329308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.161 [2024-11-15 11:44:36.329338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.161 [2024-11-15 11:44:36.329363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.161 [2024-11-15 11:44:36.329397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.161 qpair failed and we were unable to recover it. 00:25:56.161 [2024-11-15 11:44:36.339180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.161 [2024-11-15 11:44:36.339274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.161 [2024-11-15 11:44:36.339300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.161 [2024-11-15 11:44:36.339330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.161 [2024-11-15 11:44:36.339344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.161 [2024-11-15 11:44:36.339388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.161 qpair failed and we were unable to recover it. 
00:25:56.161 [2024-11-15 11:44:36.349171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.161 [2024-11-15 11:44:36.349264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.161 [2024-11-15 11:44:36.349290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.161 [2024-11-15 11:44:36.349312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.161 [2024-11-15 11:44:36.349327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.161 [2024-11-15 11:44:36.349360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.161 qpair failed and we were unable to recover it. 00:25:56.161 [2024-11-15 11:44:36.359220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.161 [2024-11-15 11:44:36.359319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.161 [2024-11-15 11:44:36.359346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.161 [2024-11-15 11:44:36.359360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.161 [2024-11-15 11:44:36.359374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.161 [2024-11-15 11:44:36.359405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.161 qpair failed and we were unable to recover it. 00:25:56.161 [2024-11-15 11:44:36.369217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.161 [2024-11-15 11:44:36.369301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.161 [2024-11-15 11:44:36.369334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.161 [2024-11-15 11:44:36.369348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.161 [2024-11-15 11:44:36.369361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.161 [2024-11-15 11:44:36.369392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.161 qpair failed and we were unable to recover it. 
00:25:56.161 [2024-11-15 11:44:36.379293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.161 [2024-11-15 11:44:36.379394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.161 [2024-11-15 11:44:36.379419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.161 [2024-11-15 11:44:36.379433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.161 [2024-11-15 11:44:36.379446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.379478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.389283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.389379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.389410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.389426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.389439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.389469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.399355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.399440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.399465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.399479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.399493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.399524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 
00:25:56.162 [2024-11-15 11:44:36.409357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.409442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.409471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.409486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.409499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.409529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.419400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.419491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.419517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.419531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.419544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.419575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.429435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.429522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.429551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.429568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.429586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.429630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 
00:25:56.162 [2024-11-15 11:44:36.439479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.439560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.439584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.439598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.439610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.439651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.449476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.449558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.449584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.449598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.449611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.449641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.459496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.459617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.459643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.459657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.459670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.459701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 
00:25:56.162 [2024-11-15 11:44:36.469566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.469652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.469677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.469691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.469703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.469734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.479606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.479690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.479716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.479730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.479743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.479773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.489591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.489673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.489699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.489713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.489726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.489757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 
00:25:56.162 [2024-11-15 11:44:36.499618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.499710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.499735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.162 [2024-11-15 11:44:36.499750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.162 [2024-11-15 11:44:36.499763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.162 [2024-11-15 11:44:36.499792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.162 qpair failed and we were unable to recover it. 00:25:56.162 [2024-11-15 11:44:36.509633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.162 [2024-11-15 11:44:36.509753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.162 [2024-11-15 11:44:36.509778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.509792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.509805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.509835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 00:25:56.163 [2024-11-15 11:44:36.519692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.519793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.519833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.519861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.519886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.519931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 
00:25:56.163 [2024-11-15 11:44:36.529703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.529786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.529814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.529829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.529842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.529874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 00:25:56.163 [2024-11-15 11:44:36.539827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.539918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.539944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.539958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.539972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.540002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 00:25:56.163 [2024-11-15 11:44:36.549827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.549928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.549954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.549969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.549981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.550011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 
00:25:56.163 [2024-11-15 11:44:36.559786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.559865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.559891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.559906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.559924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.559957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 00:25:56.163 [2024-11-15 11:44:36.569788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.569882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.569908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.569922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.569935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.569966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 00:25:56.163 [2024-11-15 11:44:36.579911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.163 [2024-11-15 11:44:36.580019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.163 [2024-11-15 11:44:36.580045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.163 [2024-11-15 11:44:36.580059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.163 [2024-11-15 11:44:36.580072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.163 [2024-11-15 11:44:36.580102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.163 qpair failed and we were unable to recover it. 
00:25:56.421 [2024-11-15 11:44:36.589886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.421 [2024-11-15 11:44:36.589967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.421 [2024-11-15 11:44:36.589994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.421 [2024-11-15 11:44:36.590009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.421 [2024-11-15 11:44:36.590022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fdd38000b90 00:25:56.422 [2024-11-15 11:44:36.590053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.422 qpair failed and we were unable to recover it. 00:25:56.422 [2024-11-15 11:44:36.590195] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:25:56.422 A controller has encountered a failure and is being reset. 00:25:56.422 Controller properly reset. 00:25:56.422 Initializing NVMe Controllers 00:25:56.422 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:56.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:56.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:56.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:56.422 Initialization complete. Launching workers. 
00:25:56.422 Starting thread on core 1 00:25:56.422 Starting thread on core 2 00:25:56.422 Starting thread on core 3 00:25:56.422 Starting thread on core 0 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:56.422 00:25:56.422 real 0m10.705s 00:25:56.422 user 0m19.209s 00:25:56.422 sys 0m4.852s 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.422 ************************************ 00:25:56.422 END TEST nvmf_target_disconnect_tc2 00:25:56.422 ************************************ 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.422 rmmod nvme_tcp 00:25:56.422 rmmod nvme_fabrics 00:25:56.422 rmmod nvme_keyring 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3040484 ']' 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3040484 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3040484 ']' 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3040484 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3040484 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3040484' 00:25:56.422 killing process with pid 3040484 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3040484 00:25:56.422 11:44:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3040484 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.682 11:44:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.221 11:44:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:59.221 00:25:59.221 real 0m15.788s 00:25:59.221 user 0m45.339s 00:25:59.221 sys 0m6.995s 00:25:59.221 11:44:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.221 11:44:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:59.221 ************************************ 00:25:59.221 END TEST nvmf_target_disconnect 00:25:59.221 ************************************ 00:25:59.221 11:44:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:59.221 00:25:59.221 real 5m7.769s 00:25:59.221 user 10m54.705s 00:25:59.221 sys 1m13.284s 00:25:59.221 11:44:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.221 11:44:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.221 ************************************ 00:25:59.221 END TEST nvmf_host 00:25:59.221 ************************************ 00:25:59.221 11:44:39 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:59.221 11:44:39 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:59.221 11:44:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:59.221 11:44:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:59.221 11:44:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.221 11:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:59.221 ************************************ 00:25:59.221 START TEST nvmf_target_core_interrupt_mode 00:25:59.221 ************************************ 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:59.221 * Looking for test storage... 00:25:59.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.221 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:59.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.221 --rc genhtml_branch_coverage=1 00:25:59.221 --rc genhtml_function_coverage=1 00:25:59.221 --rc genhtml_legend=1 00:25:59.221 --rc geninfo_all_blocks=1 00:25:59.221 --rc geninfo_unexecuted_blocks=1 00:25:59.222 00:25:59.222 ' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:59.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.222 --rc genhtml_branch_coverage=1 00:25:59.222 --rc genhtml_function_coverage=1 00:25:59.222 --rc genhtml_legend=1 00:25:59.222 --rc geninfo_all_blocks=1 00:25:59.222 --rc geninfo_unexecuted_blocks=1 00:25:59.222 00:25:59.222 ' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:59.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.222 --rc genhtml_branch_coverage=1 00:25:59.222 --rc genhtml_function_coverage=1 00:25:59.222 --rc genhtml_legend=1 00:25:59.222 --rc geninfo_all_blocks=1 00:25:59.222 --rc geninfo_unexecuted_blocks=1 00:25:59.222 00:25:59.222 ' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:59.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.222 --rc genhtml_branch_coverage=1 00:25:59.222 --rc genhtml_function_coverage=1 00:25:59.222 --rc genhtml_legend=1 00:25:59.222 --rc geninfo_all_blocks=1 00:25:59.222 --rc geninfo_unexecuted_blocks=1 00:25:59.222 00:25:59.222 ' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:59.222 ************************************ 00:25:59.222 START TEST nvmf_abort 00:25:59.222 ************************************ 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:59.222 * Looking for test storage... 00:25:59.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:59.222 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.223 --rc genhtml_branch_coverage=1 00:25:59.223 --rc genhtml_function_coverage=1 00:25:59.223 --rc genhtml_legend=1 00:25:59.223 --rc geninfo_all_blocks=1 00:25:59.223 --rc geninfo_unexecuted_blocks=1 00:25:59.223 00:25:59.223 ' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.223 --rc genhtml_branch_coverage=1 00:25:59.223 --rc genhtml_function_coverage=1 00:25:59.223 --rc genhtml_legend=1 00:25:59.223 --rc geninfo_all_blocks=1 00:25:59.223 --rc geninfo_unexecuted_blocks=1 00:25:59.223 00:25:59.223 ' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.223 --rc genhtml_branch_coverage=1 00:25:59.223 --rc genhtml_function_coverage=1 00:25:59.223 --rc genhtml_legend=1 00:25:59.223 --rc geninfo_all_blocks=1 00:25:59.223 --rc geninfo_unexecuted_blocks=1 00:25:59.223 00:25:59.223 ' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:59.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.223 --rc genhtml_branch_coverage=1 00:25:59.223 --rc genhtml_function_coverage=1 00:25:59.223 --rc genhtml_legend=1 00:25:59.223 --rc geninfo_all_blocks=1 00:25:59.223 --rc geninfo_unexecuted_blocks=1 00:25:59.223 00:25:59.223 ' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.223 11:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:59.223 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:25:59.224 11:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:01.754 11:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:01.754 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:01.754 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:01.754 Found net devices under 0000:09:00.0: cvl_0_0 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:01.754 Found net devices under 0000:09:00.1: cvl_0_1 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:01.754 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:01.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:26:01.755 00:26:01.755 --- 10.0.0.2 ping statistics --- 00:26:01.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.755 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:26:01.755 00:26:01.755 --- 10.0.0.1 ping statistics --- 00:26:01.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.755 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3043293 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3043293 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3043293 ']' 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.755 11:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.755 [2024-11-15 11:44:41.914379] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:01.755 [2024-11-15 11:44:41.915442] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:26:01.755 [2024-11-15 11:44:41.915497] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.755 [2024-11-15 11:44:41.986671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.755 [2024-11-15 11:44:42.045941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.755 [2024-11-15 11:44:42.045989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.755 [2024-11-15 11:44:42.046017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.755 [2024-11-15 11:44:42.046028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.755 [2024-11-15 11:44:42.046037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.755 [2024-11-15 11:44:42.047424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.755 [2024-11-15 11:44:42.047487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.755 [2024-11-15 11:44:42.047491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.755 [2024-11-15 11:44:42.133018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:01.755 [2024-11-15 11:44:42.133225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:01.755 [2024-11-15 11:44:42.133227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:26:01.755 [2024-11-15 11:44:42.133517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.755 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 [2024-11-15 11:44:42.180193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 Malloc0 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 Delay0 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 [2024-11-15 11:44:42.256353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.014 11:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:02.014 [2024-11-15 11:44:42.401470] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:04.542 Initializing NVMe Controllers 00:26:04.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:04.542 controller IO queue size 128 less than required 00:26:04.543 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:04.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:04.543 Initialization complete. Launching workers. 
00:26:04.543 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29398 00:26:04.543 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29455, failed to submit 66 00:26:04.543 success 29398, unsuccessful 57, failed 0 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.543 rmmod nvme_tcp 00:26:04.543 rmmod nvme_fabrics 00:26:04.543 rmmod nvme_keyring 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3043293 ']' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3043293 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3043293 ']' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3043293 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043293 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043293' 00:26:04.543 killing process with pid 3043293 
00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3043293 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3043293 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.543 11:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.079 00:26:07.079 real 0m7.599s 00:26:07.079 user 0m9.837s 00:26:07.079 sys 0m3.012s 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:07.079 ************************************ 00:26:07.079 END TEST nvmf_abort 00:26:07.079 ************************************ 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.079 11:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:07.079 ************************************ 00:26:07.079 START TEST nvmf_ns_hotplug_stress 00:26:07.079 ************************************ 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:07.080 * Looking for test storage... 
00:26:07.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:07.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.080 --rc genhtml_branch_coverage=1 00:26:07.080 --rc genhtml_function_coverage=1 00:26:07.080 --rc genhtml_legend=1 00:26:07.080 --rc geninfo_all_blocks=1 00:26:07.080 --rc geninfo_unexecuted_blocks=1 00:26:07.080 00:26:07.080 ' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:07.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.080 --rc genhtml_branch_coverage=1 00:26:07.080 --rc genhtml_function_coverage=1 00:26:07.080 --rc genhtml_legend=1 00:26:07.080 --rc geninfo_all_blocks=1 00:26:07.080 --rc geninfo_unexecuted_blocks=1 00:26:07.080 00:26:07.080 ' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:07.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.080 --rc genhtml_branch_coverage=1 00:26:07.080 --rc genhtml_function_coverage=1 00:26:07.080 --rc genhtml_legend=1 00:26:07.080 --rc geninfo_all_blocks=1 00:26:07.080 --rc geninfo_unexecuted_blocks=1 00:26:07.080 00:26:07.080 ' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:07.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.080 --rc genhtml_branch_coverage=1 00:26:07.080 --rc genhtml_function_coverage=1 
00:26:07.080 --rc genhtml_legend=1 00:26:07.080 --rc geninfo_all_blocks=1 00:26:07.080 --rc geninfo_unexecuted_blocks=1 00:26:07.080 00:26:07.080 ' 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.080 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.081 11:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:08.996 11:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:08.996 11:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.996 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:08.997 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:08.997 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.997 
11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:08.997 Found net devices under 0000:09:00.0: cvl_0_0 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:08.997 Found net devices under 0000:09:00.1: cvl_0_1 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:08.997 11:44:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:08.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:26:08.997 00:26:08.997 --- 10.0.0.2 ping statistics --- 00:26:08.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.997 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:26:08.997 00:26:08.997 --- 10.0.0.1 ping statistics --- 00:26:08.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.997 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:08.997 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3045593 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3045593 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3045593 ']' 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.998 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:09.256 [2024-11-15 11:44:49.459300] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:09.256 [2024-11-15 11:44:49.460394] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:26:09.256 [2024-11-15 11:44:49.460453] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.256 [2024-11-15 11:44:49.533873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:09.256 [2024-11-15 11:44:49.598814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.256 [2024-11-15 11:44:49.598882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.256 [2024-11-15 11:44:49.598911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.256 [2024-11-15 11:44:49.598922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.256 [2024-11-15 11:44:49.598932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.256 [2024-11-15 11:44:49.600584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.256 [2024-11-15 11:44:49.600646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.256 [2024-11-15 11:44:49.600642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:09.514 [2024-11-15 11:44:49.700254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:09.514 [2024-11-15 11:44:49.700503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:09.514 [2024-11-15 11:44:49.700506] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:09.514 [2024-11-15 11:44:49.700797] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:09.514 11:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:09.773 [2024-11-15 11:44:49.993377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.773 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:10.032 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.291 [2024-11-15 11:44:50.649691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.291 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:10.550 11:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:11.116 Malloc0 00:26:11.116 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:11.116 Delay0 00:26:11.116 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.376 11:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:11.635 NULL1 00:26:11.635 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:26:12.201 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3045934 00:26:12.201 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:12.201 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:12.201 11:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:13.135 Read completed with error (sct=0, sc=11) 00:26:13.135 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.392 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:13.392 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:13.392 11:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:13.649 true 00:26:13.649 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:13.649 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.581 11:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:14.839 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:14.839 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:15.096 true 00:26:15.096 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:15.096 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:26:15.353 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:15.611 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:15.611 11:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:15.868 true 00:26:15.868 11:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:15.868 11:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:16.800 11:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:16.800 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:16.800 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:17.057 true 00:26:17.057 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:17.057 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.315 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.572 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:17.572 11:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:17.830 true 00:26:17.830 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:17.830 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:18.395 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:18.395 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:18.395 11:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:18.653 true 00:26:18.653 11:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:18.653 11:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.587 11:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.152 11:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:20.152 11:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:20.152 true 00:26:20.152 11:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:20.152 11:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.718 11:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.976 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:20.976 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:21.233 true 00:26:21.233 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:21.233 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.491 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.748 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:21.748 11:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:22.005 true 00:26:22.005 11:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:22.005 11:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:26:22.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:22.938 11:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:22.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:23.196 11:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:23.196 11:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:23.453 true 00:26:23.453 11:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:23.454 11:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:23.712 11:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.969 11:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:23.969 11:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:24.227 true 00:26:24.227 11:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:24.227 11:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.159 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.417 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:25.417 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:25.674 true 00:26:25.674 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:25.674 11:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.931 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:26.189 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:26.189 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:26.445 true 00:26:26.445 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:26.445 11:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:27.374 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.631 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:27.631 11:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:27.889 true 00:26:27.889 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:27.889 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.147 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.404 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:28.404 11:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:28.661 true 00:26:28.661 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:28.661 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.918 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.482 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:29.482 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:29.482 true 00:26:29.482 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:29.482 11:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.411 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.667 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:30.667 11:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:30.925 true 00:26:30.925 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:30.925 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.182 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:31.440 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:31.440 11:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:31.696 true 00:26:31.696 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:31.696 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.952 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.209 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:32.209 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:32.466 true 00:26:32.466 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:32.466 11:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:26:33.835 11:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:33.835 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:33.835 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:34.093 true 00:26:34.093 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:34.093 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:34.351 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.608 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:34.608 11:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:34.866 true 00:26:34.866 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:34.866 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:35.124 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.381 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:35.381 11:45:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:35.639 true 00:26:35.639 11:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:35.639 11:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.645 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.903 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:36.903 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:37.160 true 00:26:37.418 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:37.418 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:37.676 11:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:37.934 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:37.934 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:38.191 true 00:26:38.191 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:38.191 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.449 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.707 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:38.707 11:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:38.964 true 00:26:38.964 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:38.964 11:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:39.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:39.895 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:39.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:40.153 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:40.153 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:40.410 true 00:26:40.411 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:40.411 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.668 11:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:40.926 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:40.926 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:41.183 true 00:26:41.183 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:41.183 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.440 11:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.698 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:41.698 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:41.956 true 00:26:41.956 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:41.956 11:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.889 Initializing NVMe Controllers 00:26:42.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:42.889 Controller IO queue size 128, less than required. 00:26:42.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:42.889 Controller IO queue size 128, less than required. 00:26:42.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:42.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:42.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:42.889 Initialization complete. Launching workers. 
00:26:42.889 ======================================================== 00:26:42.889 Latency(us) 00:26:42.889 Device Information : IOPS MiB/s Average min max 00:26:42.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 638.37 0.31 89106.90 3205.72 1014515.84 00:26:42.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8972.40 4.38 14266.08 1840.50 368104.92 00:26:42.889 ======================================================== 00:26:42.889 Total : 9610.77 4.69 19237.16 1840.50 1014515.84 00:26:42.889 00:26:42.889 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:43.147 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:26:43.147 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:26:43.405 true 00:26:43.405 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3045934 00:26:43.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3045934) - No such process 00:26:43.405 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3045934 00:26:43.405 11:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:43.663 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:43.920 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:43.920 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:43.920 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:43.920 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.920 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:44.178 null0 00:26:44.436 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.436 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.436 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:44.694 null1 00:26:44.694 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.694 
11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.694 11:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:44.952 null2 00:26:44.952 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.952 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.952 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:45.211 null3 00:26:45.211 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:45.211 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:45.211 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:45.469 null4 00:26:45.469 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:45.469 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:45.469 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:45.727 null5 00:26:45.727 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:45.727 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:45.727 11:45:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:45.985 null6 00:26:45.985 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:45.985 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:45.985 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:46.243 null7 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:46.243 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3050659 3050661 3050665 3050667 3050670 3050674 3050677 3050680 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.244 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:46.501 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:46.501 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:46.501 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:46.501 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:46.501 11:45:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:46.502 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:46.502 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:46.502 11:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.760 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:47.018 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
00:26:47.276 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.277 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.277 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.534 11:45:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:47.792 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.050 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:48.308 11:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.308 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.566 11:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:48.824 
11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:48.824 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.391 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:49.392 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:49.650 11:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.909 
11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.909 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:50.168 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.426 11:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:50.685 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:50.943 11:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:50.943 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:51.201 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:51.201 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:51.201 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:51.201 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:51.201 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:51.458 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:51.458 
11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:51.458 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:51.716 11:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:51.973 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.264 rmmod nvme_tcp 00:26:52.264 rmmod nvme_fabrics 00:26:52.264 rmmod nvme_keyring 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3045593 ']' 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3045593 00:26:52.264 11:45:32 
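The churn traced above comes from a short loop in ns_hotplug_stress.sh (the @16-@18 tags): ten passes that attach namespaces 1-8, each backed by a null bdev, to nqn.2016-06.io.spdk:cnode1 and then detach them. A rough reconstruction from the trace, not the verbatim script, is below; the rpc.py path and bdev names are taken from the log, and running the per-namespace calls in the background is an inference from the interleaved ordering of the add/remove lines.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # ten passes: attach nsid 1-8 (null0-null7) to cnode1, then detach them again
  for ((i = 0; i < 10; ++i)); do
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &
      done
      wait
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n" &
      done
      wait
  done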
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3045593 ']' 00:26:52.264 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3045593 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045593 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045593' 00:26:52.265 killing process with pid 3045593 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3045593 00:26:52.265 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3045593 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.521 11:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.049 00:26:55.049 real 0m47.923s 00:26:55.049 user 3m20.924s 00:26:55.049 sys 0m21.960s 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:55.049 11:45:34 
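The teardown traced here (nvmftestfini, nvmfcleanup, killprocess) boils down to unloading the initiator-side kernel modules, stopping the target process recorded in nvmfpid, and restoring iptables. A condensed sketch under those assumptions; the pid is the one in the log, the real killprocess has more guards (sudo/uname checks) than shown, and the "rmmod nvme_tcp" style lines above are the verbose output of modprobe -r.

  # unload the kernel NVMe-oF initiator modules the test loaded earlier
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the target: confirm the pid is still alive, then kill and reap it
  pid=3045593                                   # nvmfpid recorded at startup
  if kill -0 "$pid" 2>/dev/null; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  fi
  # drop only the SPDK_NVMF-tagged rules, leaving the rest of the ruleset alone
  iptables-save | grep -v SPDK_NVMF | iptables-restore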
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:55.049 ************************************ 00:26:55.049 END TEST nvmf_ns_hotplug_stress 00:26:55.049 ************************************ 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:55.049 ************************************ 00:26:55.049 START TEST nvmf_delete_subsystem 00:26:55.049 ************************************ 00:26:55.049 11:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:55.049 * Looking for test storage... 00:26:55.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:55.049 11:45:35 
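The lt 1.15 2 / cmp_versions calls starting here (the per-field comparison continues in the lines just below) check whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing field by field. Pulled together, the whole check amounts to something like the helper below; the function name is mine, and purely numeric fields are assumed, as in the 1.15 vs 2 case traced here.

  version_lt() {    # same idea as the lt/cmp_versions helpers in scripts/common.sh
      local -a a b
      local i x y
      IFS=.-: read -ra a <<< "$1"
      IFS=.-: read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}        # missing fields count as 0
          (( 10#$x > 10#$y )) && return 1
          (( 10#$x < 10#$y )) && return 0
      done
      return 1                              # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # true for the version seen here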
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:55.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.049 --rc genhtml_branch_coverage=1 00:26:55.049 --rc genhtml_function_coverage=1 00:26:55.049 --rc genhtml_legend=1 00:26:55.049 --rc geninfo_all_blocks=1 00:26:55.049 --rc geninfo_unexecuted_blocks=1 00:26:55.049 00:26:55.049 ' 00:26:55.049 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:55.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.049 --rc genhtml_branch_coverage=1 00:26:55.049 --rc genhtml_function_coverage=1 00:26:55.049 --rc genhtml_legend=1 00:26:55.050 --rc geninfo_all_blocks=1 00:26:55.050 --rc geninfo_unexecuted_blocks=1 00:26:55.050 00:26:55.050 ' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.050 --rc genhtml_branch_coverage=1 00:26:55.050 --rc genhtml_function_coverage=1 00:26:55.050 --rc genhtml_legend=1 00:26:55.050 --rc geninfo_all_blocks=1 00:26:55.050 --rc 
geninfo_unexecuted_blocks=1 00:26:55.050 00:26:55.050 ' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:55.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.050 --rc genhtml_branch_coverage=1 00:26:55.050 --rc genhtml_function_coverage=1 00:26:55.050 --rc genhtml_legend=1 00:26:55.050 --rc geninfo_all_blocks=1 00:26:55.050 --rc geninfo_unexecuted_blocks=1 00:26:55.050 00:26:55.050 ' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.050 11:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.050 11:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.954 11:45:37 
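build_nvmf_app_args, traced just above, assembles the target command line as a bash array: shared-memory id, full trace mask, and, because this job runs with --interrupt-mode, the interrupt-mode flag. A sketch with the values from this run; the first array element is inferred from the launch line near the end of this section rather than from common.sh itself.

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP_SHM_ID=0
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id and "trace everything" mask
  NVMF_APP+=(--interrupt-mode)                  # added because the job passes --interrupt-mode
  # the test later wraps this in the target namespace and adds its own core mask:
  # ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x3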
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.954 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.955 11:45:37 
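The device-id tables built above (e810 = 8086:1592/159b, x722 = 8086:37d2, plus the Mellanox ids) are what the harness matches its cached PCI inventory against; this run keeps only the e810 entries because SPDK_TEST_NVMF_NICS=e810. A hand-run equivalent of that lookup, not the harness code, would be:

  # list PCI functions whose vendor:device pair matches the E810 ids above
  lspci -Dnmm | awk '$3 ~ /8086/ && $4 ~ /1592|159b/ {print $1}'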
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:56.955 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.955 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.213 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.213 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.213 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.213 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.213 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:57.214 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.214 11:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:57.214 Found net devices under 0000:09:00.0: cvl_0_0 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:57.214 Found net devices under 0000:09:00.1: cvl_0_1 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
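The "Found net devices under 0000:09:00.x" lines come from globbing the sysfs net/ directory of each matched PCI function; the up == up checks presumably read the interface's operstate (reading operstate below is an assumption). Run by hand for one port from this log:

  pci=0000:09:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      printf 'Found net device under %s: %s (%s)\n' \
          "$pci" "${dev##*/}" "$(cat "$dev/operstate")"
  done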
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:57.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:26:57.214 00:26:57.214 --- 10.0.0.2 ping statistics --- 00:26:57.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.214 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:26:57.214 00:26:57.214 --- 10.0.0.1 ping statistics --- 00:26:57.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.214 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3053454 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3053454 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3053454 ']' 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.214 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.214 [2024-11-15 11:45:37.597030] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:57.214 [2024-11-15 11:45:37.598090] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:26:57.214 [2024-11-15 11:45:37.598143] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.473 [2024-11-15 11:45:37.670117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:57.473 [2024-11-15 11:45:37.726242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.473 [2024-11-15 11:45:37.726295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.473 [2024-11-15 11:45:37.726333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.473 [2024-11-15 11:45:37.726345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.473 [2024-11-15 11:45:37.726354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.473 [2024-11-15 11:45:37.727809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.473 [2024-11-15 11:45:37.727815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.473 [2024-11-15 11:45:37.817920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:57.473 [2024-11-15 11:45:37.817972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:57.473 [2024-11-15 11:45:37.818179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
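[Editor's note] For readability, the following is a condensed sketch of the NVMe/TCP interface setup that nvmf_tcp_init traced above: the target-side e810 port (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in iptables with an SPDK_NVMF comment tag, connectivity is verified with ping, and the target is then launched inside the namespace in interrupt mode. Interface names, addresses, and flags are taken from this run; this is an illustrative summary, not the harness itself.

  # Condensed from the nvmf/common.sh trace above (names/IPs from this run)
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                              # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                        # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator sanity check
  # Target app started inside the namespace in interrupt mode (build output path assumed relative):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
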
00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.473 [2024-11-15 11:45:37.868537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.473 [2024-11-15 11:45:37.888732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.473 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.731 NULL1 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.731 11:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.731 Delay0 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3053598 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:57.731 11:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:57.732 [2024-11-15 11:45:37.964125] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
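[Editor's note] The subsystem that the delete in the next trace lines tears down under load is built with the RPC sequence shown above. In the harness, rpc_cmd wraps scripts/rpc.py against the target's RPC socket; a rough stand-alone equivalent (default /var/tmp/spdk.sock assumed, method names exactly as traced) would be:

  # Approximate rpc.py equivalents of the rpc_cmd calls above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512           # null bdev, 512 B blocks
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s added latency (microseconds)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Drive I/O from the initiator side, then delete the subsystem while commands are in flight:
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The delay bdev keeps queued commands outstanding for about a second, so the delete lands while I/O is still in flight; that is consistent with the aborted completions (sct=0, sc=8) reported by spdk_nvme_perf in the trace that follows.
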
00:26:59.628 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:59.628 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.629 11:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:59.885 Write completed with error (sct=0, sc=8) 00:26:59.885 Read completed with error (sct=0, sc=8) 00:26:59.885 Read completed with error (sct=0, sc=8) 00:26:59.885 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 [2024-11-15 11:45:40.128390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f978c00d680 is same with the state(6) to be set 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error 
(sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, 
sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 Write completed with error (sct=0, sc=8) 00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 Write completed with error (sct=0, sc=8) 
00:26:59.886 Read completed with error (sct=0, sc=8) 00:26:59.886 starting I/O failed: -6 00:26:59.886 starting I/O failed: -6 00:26:59.886 starting I/O failed: -6 00:26:59.886 starting I/O failed: -6 00:26:59.886 starting I/O failed: -6 00:27:00.816 [2024-11-15 11:45:41.103637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdc9a0 is same with the state(6) to be set 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 Read completed with error (sct=0, sc=8) 00:27:00.816 [2024-11-15 11:45:41.127799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f978c00d350 is same with the state(6) to be set 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.816 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 [2024-11-15 11:45:41.129386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb2c0 is same with the state(6) to be set 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with 
error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 [2024-11-15 11:45:41.129630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb860 is same with the state(6) to be set 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 
00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Write completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 Read completed with error (sct=0, sc=8) 00:27:00.817 [2024-11-15 11:45:41.131331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb4a0 is same with the state(6) to be set 00:27:00.817 Initializing NVMe Controllers 00:27:00.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:00.817 Controller IO queue size 128, less than required. 00:27:00.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:00.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:00.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:00.817 Initialization complete. Launching workers. 00:27:00.817 ======================================================== 00:27:00.817 Latency(us) 00:27:00.817 Device Information : IOPS MiB/s Average min max 00:27:00.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.98 0.09 955395.84 798.84 1012350.04 00:27:00.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.34 0.07 915015.39 469.51 1012279.03 00:27:00.817 ======================================================== 00:27:00.817 Total : 337.32 0.16 937877.85 469.51 1012350.04 00:27:00.817 00:27:00.817 [2024-11-15 11:45:41.132202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdc9a0 (9): Bad file descriptor 00:27:00.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:00.817 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.817 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:00.817 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3053598 00:27:00.817 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3053598 00:27:01.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3053598) - No such process 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3053598 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3053598 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3053598 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:01.384 [2024-11-15 11:45:41.652680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3054000 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:01.384 11:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:01.384 11:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:01.384 [2024-11-15 11:45:41.711259] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:01.949 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:01.949 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:01.949 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:02.513 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:02.513 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:02.513 11:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:02.773 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:02.773 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:02.773 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:03.400 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:03.400 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:03.400 11:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:03.995 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:03.995 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:03.995 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:04.563 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:04.563 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:04.563 11:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:04.563 Initializing NVMe Controllers 00:27:04.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.563 Controller IO queue size 128, less than required. 00:27:04.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:04.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:04.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:04.563 Initialization complete. Launching workers. 00:27:04.563 ======================================================== 00:27:04.563 Latency(us) 00:27:04.563 Device Information : IOPS MiB/s Average min max 00:27:04.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004354.31 1000189.66 1041982.70 00:27:04.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005248.72 1000177.00 1011831.99 00:27:04.563 ======================================================== 00:27:04.563 Total : 256.00 0.12 1004801.52 1000177.00 1041982.70 00:27:04.563 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3054000 00:27:04.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3054000) - No such process 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3054000 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.821 rmmod nvme_tcp 00:27:04.821 rmmod nvme_fabrics 00:27:04.821 rmmod nvme_keyring 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3053454 ']' 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3053454 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3053454 ']' 00:27:04.821 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3053454 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # uname 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053454 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053454' 00:27:05.208 killing process with pid 3053454 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3053454 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3053454 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.208 11:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.119 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.119 00:27:07.119 real 0m12.557s 00:27:07.119 user 0m24.605s 00:27:07.119 sys 0m3.979s 00:27:07.119 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.119 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:07.119 ************************************ 00:27:07.119 END TEST nvmf_delete_subsystem 00:27:07.119 ************************************ 00:27:07.379 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management 
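[Editor's note] The teardown traced above (nvmftestfini / nvmf_tcp_fini) roughly amounts to the steps below: unload the kernel NVMe/TCP modules, kill the target started at setup, strip only the SPDK-tagged iptables rules, remove the target namespace, and flush the initiator address. The namespace deletion is an assumption about what _remove_spdk_ns does here, since its body is suppressed in the trace.

  # Condensed teardown, mirroring the cleanup trace above
  modprobe -v -r nvme-tcp                                   # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                           # target pid from setup (3053454 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # drop only the rules tagged at setup
  ip netns delete cvl_0_0_ns_spdk                           # assumed body of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1                                  # initiator address removed, as traced next
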
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:07.380 ************************************ 00:27:07.380 START TEST nvmf_host_management 00:27:07.380 ************************************ 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:07.380 * Looking for test storage... 00:27:07.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.380 --rc genhtml_branch_coverage=1 00:27:07.380 --rc genhtml_function_coverage=1 00:27:07.380 --rc genhtml_legend=1 00:27:07.380 --rc geninfo_all_blocks=1 00:27:07.380 --rc geninfo_unexecuted_blocks=1 00:27:07.380 00:27:07.380 ' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.380 --rc genhtml_branch_coverage=1 00:27:07.380 --rc genhtml_function_coverage=1 00:27:07.380 --rc genhtml_legend=1 00:27:07.380 --rc geninfo_all_blocks=1 00:27:07.380 --rc geninfo_unexecuted_blocks=1 00:27:07.380 00:27:07.380 ' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.380 --rc genhtml_branch_coverage=1 00:27:07.380 --rc genhtml_function_coverage=1 00:27:07.380 --rc genhtml_legend=1 00:27:07.380 --rc geninfo_all_blocks=1 00:27:07.380 --rc geninfo_unexecuted_blocks=1 00:27:07.380 00:27:07.380 ' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:07.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.380 --rc genhtml_branch_coverage=1 00:27:07.380 --rc genhtml_function_coverage=1 00:27:07.380 --rc genhtml_legend=1 
00:27:07.380 --rc geninfo_all_blocks=1 00:27:07.380 --rc geninfo_unexecuted_blocks=1 00:27:07.380 00:27:07.380 ' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.380 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.381 11:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.381 11:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.913 11:45:49 
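The build_nvmf_app_args lines traced above accumulate the nvmf_tgt argument array: the shared-memory instance id, the full tracepoint mask, and, because this suite runs the interrupt-mode variant, --interrupt-mode. A condensed sketch (the interrupt_mode flag below is illustrative; the real script keys off its own test variables):

  NVMF_APP_SHM_ID=0
  NVMF_APP=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # instance id + tracepoint group mask 0xFFFF
  interrupt_mode=1                             # this suite: nvmf_target_core_interrupt_mode
  if [ "$interrupt_mode" -eq 1 ]; then
      NVMF_APP+=(--interrupt-mode)             # poll groups sleep on events instead of busy polling
  fi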
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
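The array setup traced above is a vendor:device lookup table for supported NICs. pci_bus_cache (populated earlier in common.sh, not shown in this excerpt) maps "vendor:device" strings to PCI bus addresses; the e810/x722/mlx arrays collect whatever matches, and since this job sets SPDK_TEST_NVMF_NICS=e810 only the E810 list is kept as pci_devs further down. A rough sketch of the pattern:

  intel=0x8086 mellanox=0x15b3
  e810=() x722=() mlx=()
  # pci_bus_cache["<vendor>:<device>"] -> PCI addresses, filled in earlier (not shown here)
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # 0x159b is what the two ports below report
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x101d"]})  # one of several ConnectX ids in the table
  pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810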
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:09.913 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.913 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:09.914 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:09.914 Found net devices under 0000:09:00.0: cvl_0_0 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:09.914 Found net devices under 0000:09:00.1: cvl_0_1 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
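With two ice ports found (cvl_0_0 and cvl_0_1), the helper splits them into target and initiator roles on a fixed test subnet, as the assignments traced above show. Condensed, with values taken from the trace:

  NVMF_FIRST_INITIATOR_IP=10.0.0.1
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_TARGET_INTERFACE=cvl_0_0      # moved into its own network namespace just below
  NVMF_INITIATOR_INTERFACE=cvl_0_1   # stays in the root namespace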
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:27:09.914 00:27:09.914 --- 10.0.0.2 ping statistics --- 00:27:09.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.914 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
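The nvmf_tcp_init sequence traced above isolates the target port in its own network namespace so host and target can exercise a real link on a single machine. A condensed sketch of the same commands, run as root (the trace also tags the iptables rule with an -m comment, omitted here):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                     # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator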
00:27:09.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:27:09.914 00:27:09.914 --- 10.0.0.1 ping statistics --- 00:27:09.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.914 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.914 11:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.914 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3056348 00:27:09.914 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:09.914 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3056348 00:27:09.914 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3056348 ']' 00:27:09.914 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.914 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:09.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.915 [2024-11-15 11:45:50.053287] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:09.915 [2024-11-15 11:45:50.054357] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:09.915 [2024-11-15 11:45:50.054423] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.915 [2024-11-15 11:45:50.132041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:09.915 [2024-11-15 11:45:50.190495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.915 [2024-11-15 11:45:50.190561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.915 [2024-11-15 11:45:50.190590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.915 [2024-11-15 11:45:50.190610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.915 [2024-11-15 11:45:50.190619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.915 [2024-11-15 11:45:50.192210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.915 [2024-11-15 11:45:50.192256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:09.915 [2024-11-15 11:45:50.192371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:09.915 [2024-11-15 11:45:50.192376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.915 [2024-11-15 11:45:50.280659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:09.915 [2024-11-15 11:45:50.280939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:09.915 [2024-11-15 11:45:50.281831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:09.915 [2024-11-15 11:45:50.282028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:09.915 [2024-11-15 11:45:50.282152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
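The DPDK EAL and reactor notices above come from launching the target inside the namespace with a four-core mask. A condensed form of the traced command (the long workspace path is specific to this CI node; backgrounding with & and $! is how nvmfappstart effectively records the pid):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  # -m 0x1E selects cores 1-4, hence the four "Reactor started on core N" notices;
  # --interrupt-mode switches app_thread and the poll groups to interrupt-driven scheduling.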
00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.915 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.915 [2024-11-15 11:45:50.333154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.174 Malloc0 00:27:10.174 [2024-11-15 11:45:50.413427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3056510 00:27:10.174 11:45:50 
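Only the transport creation is traced directly above; the Malloc0 bdev, the cnode0 subsystem, its namespace, listener and allowed host come from the generated rpcs.txt, which the log does not print. A typical equivalent RPC sequence, sketched with method names from SPDK's scripts/rpc.py (exact options in the real rpcs.txt may differ):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0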
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3056510 /var/tmp/bdevperf.sock 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3056510 ']' 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:10.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:10.174 { 00:27:10.174 "params": { 00:27:10.174 "name": "Nvme$subsystem", 00:27:10.174 "trtype": "$TEST_TRANSPORT", 00:27:10.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.174 "adrfam": "ipv4", 00:27:10.174 "trsvcid": "$NVMF_PORT", 00:27:10.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.174 "hdgst": ${hdgst:-false}, 00:27:10.174 "ddgst": ${ddgst:-false} 00:27:10.174 }, 00:27:10.174 "method": "bdev_nvme_attach_controller" 00:27:10.174 } 00:27:10.174 EOF 00:27:10.174 )") 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:10.174 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:10.174 "params": { 00:27:10.174 "name": "Nvme0", 00:27:10.174 "trtype": "tcp", 00:27:10.174 "traddr": "10.0.0.2", 00:27:10.174 "adrfam": "ipv4", 00:27:10.174 "trsvcid": "4420", 00:27:10.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:10.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:10.174 "hdgst": false, 00:27:10.174 "ddgst": false 00:27:10.174 }, 00:27:10.174 "method": "bdev_nvme_attach_controller" 00:27:10.174 }' 00:27:10.175 [2024-11-15 11:45:50.490121] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:10.175 [2024-11-15 11:45:50.490200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056510 ] 00:27:10.175 [2024-11-15 11:45:50.563199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.433 [2024-11-15 11:45:50.624934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.433 Running I/O for 10 seconds... 00:27:10.690 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
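The JSON fragment printed above is what gen_nvmf_target_json hands to bdevperf; the /dev/fd/63 in the traced command line is almost certainly bash process substitution. A condensed form of the traced invocation:

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10
  # -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: read-back verification, -t 10: run 10 s;
  # the generated config attaches Nvme0 to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as host0.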
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:10.691 11:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.950 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.950 [2024-11-15 11:45:51.241673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.241977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.241993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.950 [2024-11-15 11:45:51.242217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.950 [2024-11-15 11:45:51.242232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.242977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.242992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.951 [2024-11-15 11:45:51.243378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.951 [2024-11-15 11:45:51.243391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.243579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.952 [2024-11-15 11:45:51.243593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.244815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:10.952 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.952 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:10.952 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.952 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:10.952 task offset: 84480 on job bdev=Nvme0n1 fails 00:27:10.952 00:27:10.952 Latency(us) 00:27:10.952 [2024-11-15T10:45:51.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.952 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:10.952 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:10.952 Verification LBA range: start 0x0 length 0x400 00:27:10.952 Nvme0n1 : 0.40 1606.82 100.43 160.68 0.00 35151.94 2609.30 34564.17 00:27:10.952 [2024-11-15T10:45:51.379Z] =================================================================================================================== 00:27:10.952 [2024-11-15T10:45:51.379Z] Total : 1606.82 100.43 160.68 0.00 35151.94 2609.30 34564.17 00:27:10.952 [2024-11-15 11:45:51.246886] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:10.952 [2024-11-15 11:45:51.246931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe38a40 (9): Bad file descriptor 00:27:10.952 [2024-11-15 11:45:51.248156] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:10.952 [2024-11-15 11:45:51.248258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:10.952 [2024-11-15 11:45:51.248286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.952 [2024-11-15 11:45:51.248330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:10.952 [2024-11-15 11:45:51.248349] 
nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:27:10.952 [2024-11-15 11:45:51.248367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:10.952 [2024-11-15 11:45:51.248379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xe38a40 00:27:10.952 [2024-11-15 11:45:51.248416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe38a40 (9): Bad file descriptor 00:27:10.952 [2024-11-15 11:45:51.248442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:10.952 [2024-11-15 11:45:51.248456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:10.952 [2024-11-15 11:45:51.248470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:10.952 [2024-11-15 11:45:51.248485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:10.952 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.952 11:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3056510 00:27:11.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3056510) - No such process 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:11.886 { 00:27:11.886 "params": { 00:27:11.886 "name": "Nvme$subsystem", 00:27:11.886 "trtype": "$TEST_TRANSPORT", 00:27:11.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.886 "adrfam": "ipv4", 00:27:11.886 "trsvcid": "$NVMF_PORT", 00:27:11.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.886 "hdgst": ${hdgst:-false}, 00:27:11.886 "ddgst": ${ddgst:-false} 00:27:11.886 }, 00:27:11.886 "method": "bdev_nvme_attach_controller" 00:27:11.886 } 00:27:11.886 EOF 00:27:11.886 )") 
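[editor note] The failed run above ends with the initiator unable to reconnect because the subsystem does not yet allow nqn.2016-06.io.spdk:host0 (the COMMAND SPECIFIC 01/84 completion); host_management.sh then adds the host via nvmf_subsystem_add_host and launches a fresh bdevperf, with gen_nvmf_target_json assembling one bdev_nvme_attach_controller fragment per subsystem in the heredoc above and handing it to bdevperf through --json /dev/fd/62. Below is a minimal standalone sketch of that invocation. The outer "subsystems"/"bdev"/"config" wrapper is an assumption (the trace never prints it), /tmp/bdevperf.json is only an illustrative stand-in for the /dev/fd/62 process substitution, and the params object mirrors the one printf'd a few lines further down.

# Sketch only: wrapper layout assumed; params taken from the generated config below.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the traced run: queue depth 64, 64 KiB I/Os, verify, 1 second.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1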
00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:11.886 11:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:11.886 "params": { 00:27:11.886 "name": "Nvme0", 00:27:11.886 "trtype": "tcp", 00:27:11.886 "traddr": "10.0.0.2", 00:27:11.886 "adrfam": "ipv4", 00:27:11.886 "trsvcid": "4420", 00:27:11.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:11.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:11.886 "hdgst": false, 00:27:11.886 "ddgst": false 00:27:11.886 }, 00:27:11.886 "method": "bdev_nvme_attach_controller" 00:27:11.886 }' 00:27:11.886 [2024-11-15 11:45:52.301758] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:11.886 [2024-11-15 11:45:52.301835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056684 ] 00:27:12.144 [2024-11-15 11:45:52.370241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.144 [2024-11-15 11:45:52.429948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.401 Running I/O for 1 seconds... 00:27:13.334 1654.00 IOPS, 103.38 MiB/s 00:27:13.334 Latency(us) 00:27:13.334 [2024-11-15T10:45:53.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.334 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:13.334 Verification LBA range: start 0x0 length 0x400 00:27:13.334 Nvme0n1 : 1.08 1604.50 100.28 0.00 0.00 37862.22 9417.77 49321.91 00:27:13.334 [2024-11-15T10:45:53.761Z] =================================================================================================================== 00:27:13.334 [2024-11-15T10:45:53.761Z] Total : 1604.50 100.28 0.00 0.00 37862.22 9417.77 49321.91 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set 
+e 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.592 rmmod nvme_tcp 00:27:13.592 rmmod nvme_fabrics 00:27:13.592 rmmod nvme_keyring 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3056348 ']' 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3056348 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3056348 ']' 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3056348 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.592 11:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056348 00:27:13.592 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:13.592 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:13.592 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056348' 00:27:13.592 killing process with pid 3056348 00:27:13.592 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3056348 00:27:13.592 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3056348 00:27:13.851 [2024-11-15 11:45:54.207530] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.851 11:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.851 11:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:16.390 00:27:16.390 real 0m8.698s 00:27:16.390 user 0m17.084s 00:27:16.390 sys 0m3.696s 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:16.390 ************************************ 00:27:16.390 END TEST nvmf_host_management 00:27:16.390 ************************************ 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:16.390 ************************************ 00:27:16.390 START TEST nvmf_lvol 00:27:16.390 ************************************ 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:16.390 * Looking for test storage... 
00:27:16.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.390 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.391 --rc genhtml_branch_coverage=1 00:27:16.391 --rc genhtml_function_coverage=1 00:27:16.391 --rc genhtml_legend=1 00:27:16.391 --rc geninfo_all_blocks=1 00:27:16.391 --rc geninfo_unexecuted_blocks=1 00:27:16.391 00:27:16.391 ' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.391 --rc genhtml_branch_coverage=1 00:27:16.391 --rc genhtml_function_coverage=1 00:27:16.391 --rc genhtml_legend=1 00:27:16.391 --rc geninfo_all_blocks=1 00:27:16.391 --rc geninfo_unexecuted_blocks=1 00:27:16.391 00:27:16.391 ' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.391 --rc genhtml_branch_coverage=1 00:27:16.391 --rc genhtml_function_coverage=1 00:27:16.391 --rc genhtml_legend=1 00:27:16.391 --rc geninfo_all_blocks=1 00:27:16.391 --rc geninfo_unexecuted_blocks=1 00:27:16.391 00:27:16.391 ' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.391 --rc genhtml_branch_coverage=1 00:27:16.391 --rc genhtml_function_coverage=1 00:27:16.391 --rc genhtml_legend=1 00:27:16.391 --rc geninfo_all_blocks=1 00:27:16.391 --rc geninfo_unexecuted_blocks=1 00:27:16.391 00:27:16.391 ' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.391 11:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.391 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.392 11:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:18.296 11:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:18.296 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:18.296 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:18.296 Found net devices under 0000:09:00.0: cvl_0_0 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:18.296 Found net devices under 0000:09:00.1: cvl_0_1 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.296 
11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.296 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:27:18.555 00:27:18.555 --- 10.0.0.2 ping statistics --- 00:27:18.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.555 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:27:18.555 00:27:18.555 --- 10.0.0.1 ping statistics --- 00:27:18.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.555 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3058882 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3058882 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3058882 ']' 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.555 11:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:18.555 [2024-11-15 11:45:58.811435] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
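[editor note] Before the lvol target is started, nvmftestinit stitches the two e810 ports into a self-contained NVMe/TCP topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with an iptables rule, and both directions are ping-checked. A condensed sketch of that sequence, using only the commands visible in the trace above:

# Move the target-side port into its own network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the link: initiator 10.0.0.1, target 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the interfaces (and the namespace loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Connectivity checks in both directions, matching the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1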
00:27:18.555 [2024-11-15 11:45:58.812557] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:18.555 [2024-11-15 11:45:58.812627] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.555 [2024-11-15 11:45:58.885188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:18.555 [2024-11-15 11:45:58.945109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.555 [2024-11-15 11:45:58.945163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.555 [2024-11-15 11:45:58.945176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.555 [2024-11-15 11:45:58.945187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.555 [2024-11-15 11:45:58.945197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.556 [2024-11-15 11:45:58.946762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.556 [2024-11-15 11:45:58.946827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.556 [2024-11-15 11:45:58.946830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.814 [2024-11-15 11:45:59.047581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:18.814 [2024-11-15 11:45:59.047811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:18.814 [2024-11-15 11:45:59.047839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:18.814 [2024-11-15 11:45:59.048058] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
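[editor note] The target itself is then launched inside that namespace in interrupt mode with a three-core mask, which is why the EAL is handed -c 0x7 and reactors come up on cores 0, 1 and 2 before each spdk_thread is switched to interrupt mode. Annotated form of the launch line traced above (repository path shortened):

# nvmf_tgt started inside the target namespace:
#   -i 0              shared-memory instance id (NVMF_APP_SHM_ID in the trace)
#   -e 0xFFFF         tracepoint group mask, per the "Tracepoint Group Mask 0xFFFF" notice
#   --interrupt-mode  reactors wait on events instead of busy-polling
#   -m 0x7            core mask -> reactors on cores 0, 1 and 2
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7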
00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.814 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:19.072 [2024-11-15 11:45:59.355504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.072 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:19.330 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:19.330 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:19.588 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:19.588 11:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:19.847 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:20.411 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=1fc434ec-4a21-49f4-aa93-0ebd2d465aea 00:27:20.411 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1fc434ec-4a21-49f4-aa93-0ebd2d465aea lvol 20 00:27:20.669 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b4ccab3c-42fb-4088-9997-99052f557d58 00:27:20.669 11:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:20.926 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b4ccab3c-42fb-4088-9997-99052f557d58 00:27:21.184 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.441 [2024-11-15 11:46:01.623674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:21.441 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:21.699 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3059303 00:27:21.699 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:21.699 11:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:22.632 11:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b4ccab3c-42fb-4088-9997-99052f557d58 MY_SNAPSHOT 00:27:22.889 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=546d5df3-af4e-4b14-bb83-ec4491832e66 00:27:22.889 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b4ccab3c-42fb-4088-9997-99052f557d58 30 00:27:23.147 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 546d5df3-af4e-4b14-bb83-ec4491832e66 MY_CLONE 00:27:23.406 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=97441511-b6d7-4c6a-9e89-af9d674836e8 00:27:23.406 11:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 97441511-b6d7-4c6a-9e89-af9d674836e8 00:27:23.972 11:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3059303 00:27:32.080 Initializing NVMe Controllers 00:27:32.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:32.080 Controller IO queue size 128, less than required. 00:27:32.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:32.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:32.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:32.080 Initialization complete. Launching workers. 
00:27:32.080 ======================================================== 00:27:32.080 Latency(us) 00:27:32.080 Device Information : IOPS MiB/s Average min max 00:27:32.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10697.10 41.79 11970.91 5894.48 71248.40 00:27:32.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10530.70 41.14 12158.08 5903.72 79824.29 00:27:32.080 ======================================================== 00:27:32.080 Total : 21227.80 82.92 12063.76 5894.48 79824.29 00:27:32.080 00:27:32.080 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:32.338 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b4ccab3c-42fb-4088-9997-99052f557d58 00:27:32.597 11:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1fc434ec-4a21-49f4-aa93-0ebd2d465aea 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:32.855 rmmod nvme_tcp 00:27:32.855 rmmod nvme_fabrics 00:27:32.855 rmmod nvme_keyring 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3058882 ']' 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3058882 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3058882 ']' 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3058882 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3058882 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3058882' 00:27:32.855 killing process with pid 3058882 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3058882 00:27:32.855 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3058882 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.114 11:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.651 00:27:35.651 real 0m19.249s 00:27:35.651 user 0m55.885s 00:27:35.651 sys 0m8.218s 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:35.651 ************************************ 00:27:35.651 END TEST nvmf_lvol 00:27:35.651 ************************************ 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:35.651 ************************************ 00:27:35.651 START TEST nvmf_lvs_grow 00:27:35.651 
************************************ 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:35.651 * Looking for test storage... 00:27:35.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.651 --rc genhtml_branch_coverage=1 00:27:35.651 --rc genhtml_function_coverage=1 00:27:35.651 --rc genhtml_legend=1 00:27:35.651 --rc geninfo_all_blocks=1 00:27:35.651 --rc geninfo_unexecuted_blocks=1 00:27:35.651 00:27:35.651 ' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.651 --rc genhtml_branch_coverage=1 00:27:35.651 --rc genhtml_function_coverage=1 00:27:35.651 --rc genhtml_legend=1 00:27:35.651 --rc geninfo_all_blocks=1 00:27:35.651 --rc geninfo_unexecuted_blocks=1 00:27:35.651 00:27:35.651 ' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.651 --rc genhtml_branch_coverage=1 00:27:35.651 --rc genhtml_function_coverage=1 00:27:35.651 --rc genhtml_legend=1 00:27:35.651 --rc geninfo_all_blocks=1 00:27:35.651 --rc geninfo_unexecuted_blocks=1 00:27:35.651 00:27:35.651 ' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.651 --rc genhtml_branch_coverage=1 00:27:35.651 --rc genhtml_function_coverage=1 00:27:35.651 --rc genhtml_legend=1 00:27:35.651 --rc geninfo_all_blocks=1 00:27:35.651 --rc geninfo_unexecuted_blocks=1 00:27:35.651 00:27:35.651 ' 00:27:35.651 11:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.651 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.652 11:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.557 11:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:37.557 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:37.557 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:37.557 Found net devices under 0000:09:00.0: cvl_0_0 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.557 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:37.558 Found net devices under 0000:09:00.1: cvl_0_1 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:37.558 11:46:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:37.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:27:37.558 00:27:37.558 --- 10.0.0.2 ping statistics --- 00:27:37.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.558 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:27:37.558 00:27:37.558 --- 10.0.0.1 ping statistics --- 00:27:37.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.558 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:37.558 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:37.817 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:37.817 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:37.817 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.817 11:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3062557 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3062557 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3062557 ']' 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.817 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:37.817 [2024-11-15 11:46:18.052946] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
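For reference, the nvmftestinit trace above boils down to the following manual setup — a minimal sketch only; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are specific to this E810 host and this run, and will differ on other systems:

    # Sketch of the namespace/address setup performed by nvmftestinit above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP traffic on port 4420
    ping -c 1 10.0.0.2                                               # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator sanity check

With both pings answering, the target application is then launched inside the namespace, as the log continues below.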
00:27:37.817 [2024-11-15 11:46:18.054054] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:37.817 [2024-11-15 11:46:18.054121] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.817 [2024-11-15 11:46:18.128668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.817 [2024-11-15 11:46:18.185466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.817 [2024-11-15 11:46:18.185517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.817 [2024-11-15 11:46:18.185547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.817 [2024-11-15 11:46:18.185565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.817 [2024-11-15 11:46:18.185576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.817 [2024-11-15 11:46:18.186184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.076 [2024-11-15 11:46:18.284105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:38.076 [2024-11-15 11:46:18.284447] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.076 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:38.335 [2024-11-15 11:46:18.578769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:38.335 ************************************ 00:27:38.335 START TEST lvs_grow_clean 00:27:38.335 ************************************ 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:38.335 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:38.593 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:38.593 11:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:38.851 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:38.851 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:38.851 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:39.110 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:39.110 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:39.110 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 lvol 150 00:27:39.368 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=012ad1ec-7617-45f6-bd9f-5c4609367675 00:27:39.368 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:39.368 11:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:39.626 [2024-11-15 11:46:20.026699] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:39.626 [2024-11-15 11:46:20.026824] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:39.626 true 00:27:39.626 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:39.626 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:40.194 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:40.194 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:40.194 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 012ad1ec-7617-45f6-bd9f-5c4609367675 00:27:40.761 11:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:40.761 [2024-11-15 11:46:21.150985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.761 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3062998 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3062998 /var/tmp/bdevperf.sock 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3062998 ']' 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:41.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.327 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:41.327 [2024-11-15 11:46:21.492628] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:41.327 [2024-11-15 11:46:21.492729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062998 ] 00:27:41.327 [2024-11-15 11:46:21.561088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.327 [2024-11-15 11:46:21.619931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.328 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.328 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:27:41.328 11:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:41.932 Nvme0n1 00:27:41.932 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:42.224 [ 00:27:42.224 { 00:27:42.224 "name": "Nvme0n1", 00:27:42.224 "aliases": [ 00:27:42.224 "012ad1ec-7617-45f6-bd9f-5c4609367675" 00:27:42.224 ], 00:27:42.224 "product_name": "NVMe disk", 00:27:42.224 "block_size": 4096, 00:27:42.224 "num_blocks": 38912, 00:27:42.224 "uuid": "012ad1ec-7617-45f6-bd9f-5c4609367675", 00:27:42.224 "numa_id": 0, 00:27:42.224 "assigned_rate_limits": { 00:27:42.224 "rw_ios_per_sec": 0, 00:27:42.224 "rw_mbytes_per_sec": 0, 00:27:42.224 "r_mbytes_per_sec": 0, 00:27:42.224 "w_mbytes_per_sec": 0 00:27:42.224 }, 00:27:42.224 "claimed": false, 00:27:42.224 "zoned": false, 00:27:42.224 "supported_io_types": { 00:27:42.224 "read": true, 00:27:42.224 "write": true, 00:27:42.224 "unmap": true, 00:27:42.224 "flush": true, 00:27:42.224 "reset": true, 00:27:42.224 "nvme_admin": true, 00:27:42.224 "nvme_io": true, 00:27:42.224 "nvme_io_md": false, 00:27:42.224 "write_zeroes": true, 00:27:42.224 "zcopy": false, 00:27:42.224 "get_zone_info": false, 00:27:42.224 "zone_management": false, 00:27:42.224 "zone_append": false, 00:27:42.224 "compare": true, 00:27:42.224 "compare_and_write": true, 00:27:42.224 "abort": true, 00:27:42.224 "seek_hole": false, 00:27:42.224 "seek_data": false, 00:27:42.224 "copy": true, 
00:27:42.224 "nvme_iov_md": false 00:27:42.224 }, 00:27:42.224 "memory_domains": [ 00:27:42.224 { 00:27:42.224 "dma_device_id": "system", 00:27:42.224 "dma_device_type": 1 00:27:42.224 } 00:27:42.224 ], 00:27:42.224 "driver_specific": { 00:27:42.224 "nvme": [ 00:27:42.224 { 00:27:42.224 "trid": { 00:27:42.224 "trtype": "TCP", 00:27:42.224 "adrfam": "IPv4", 00:27:42.224 "traddr": "10.0.0.2", 00:27:42.224 "trsvcid": "4420", 00:27:42.224 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:42.224 }, 00:27:42.224 "ctrlr_data": { 00:27:42.224 "cntlid": 1, 00:27:42.224 "vendor_id": "0x8086", 00:27:42.225 "model_number": "SPDK bdev Controller", 00:27:42.225 "serial_number": "SPDK0", 00:27:42.225 "firmware_revision": "25.01", 00:27:42.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.225 "oacs": { 00:27:42.225 "security": 0, 00:27:42.225 "format": 0, 00:27:42.225 "firmware": 0, 00:27:42.225 "ns_manage": 0 00:27:42.225 }, 00:27:42.225 "multi_ctrlr": true, 00:27:42.225 "ana_reporting": false 00:27:42.225 }, 00:27:42.225 "vs": { 00:27:42.225 "nvme_version": "1.3" 00:27:42.225 }, 00:27:42.225 "ns_data": { 00:27:42.225 "id": 1, 00:27:42.225 "can_share": true 00:27:42.225 } 00:27:42.225 } 00:27:42.225 ], 00:27:42.225 "mp_policy": "active_passive" 00:27:42.225 } 00:27:42.225 } 00:27:42.225 ] 00:27:42.225 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3063129 00:27:42.225 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:42.225 11:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:42.225 Running I/O for 10 seconds... 
00:27:43.605 Latency(us) 00:27:43.605 [2024-11-15T10:46:24.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:43.605 Nvme0n1 : 1.00 14923.00 58.29 0.00 0.00 0.00 0.00 0.00 00:27:43.605 [2024-11-15T10:46:24.032Z] =================================================================================================================== 00:27:43.605 [2024-11-15T10:46:24.032Z] Total : 14923.00 58.29 0.00 0.00 0.00 0.00 0.00 00:27:43.605 00:27:44.172 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:44.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:44.431 Nvme0n1 : 2.00 15130.00 59.10 0.00 0.00 0.00 0.00 0.00 00:27:44.431 [2024-11-15T10:46:24.858Z] =================================================================================================================== 00:27:44.431 [2024-11-15T10:46:24.858Z] Total : 15130.00 59.10 0.00 0.00 0.00 0.00 0.00 00:27:44.431 00:27:44.431 true 00:27:44.431 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:44.431 11:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:44.688 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:44.688 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:44.688 11:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3063129 00:27:45.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:45.255 Nvme0n1 : 3.00 15124.33 59.08 0.00 0.00 0.00 0.00 0.00 00:27:45.255 [2024-11-15T10:46:25.682Z] =================================================================================================================== 00:27:45.255 [2024-11-15T10:46:25.682Z] Total : 15124.33 59.08 0.00 0.00 0.00 0.00 0.00 00:27:45.255 00:27:46.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:46.189 Nvme0n1 : 4.00 15216.75 59.44 0.00 0.00 0.00 0.00 0.00 00:27:46.189 [2024-11-15T10:46:26.616Z] =================================================================================================================== 00:27:46.189 [2024-11-15T10:46:26.616Z] Total : 15216.75 59.44 0.00 0.00 0.00 0.00 0.00 00:27:46.189 00:27:47.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:47.563 Nvme0n1 : 5.00 15297.60 59.76 0.00 0.00 0.00 0.00 0.00 00:27:47.563 [2024-11-15T10:46:27.990Z] =================================================================================================================== 00:27:47.563 [2024-11-15T10:46:27.990Z] Total : 15297.60 59.76 0.00 0.00 0.00 0.00 0.00 00:27:47.563 00:27:48.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:48.498 Nvme0n1 : 6.00 15372.67 60.05 0.00 0.00 0.00 0.00 0.00 00:27:48.498 [2024-11-15T10:46:28.925Z] 
=================================================================================================================== 00:27:48.498 [2024-11-15T10:46:28.925Z] Total : 15372.67 60.05 0.00 0.00 0.00 0.00 0.00 00:27:48.499 00:27:49.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:49.433 Nvme0n1 : 7.00 15426.29 60.26 0.00 0.00 0.00 0.00 0.00 00:27:49.433 [2024-11-15T10:46:29.860Z] =================================================================================================================== 00:27:49.433 [2024-11-15T10:46:29.860Z] Total : 15426.29 60.26 0.00 0.00 0.00 0.00 0.00 00:27:49.433 00:27:50.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:50.366 Nvme0n1 : 8.00 15450.62 60.35 0.00 0.00 0.00 0.00 0.00 00:27:50.366 [2024-11-15T10:46:30.793Z] =================================================================================================================== 00:27:50.366 [2024-11-15T10:46:30.793Z] Total : 15450.62 60.35 0.00 0.00 0.00 0.00 0.00 00:27:50.366 00:27:51.299 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:51.299 Nvme0n1 : 9.00 15483.67 60.48 0.00 0.00 0.00 0.00 0.00 00:27:51.299 [2024-11-15T10:46:31.726Z] =================================================================================================================== 00:27:51.299 [2024-11-15T10:46:31.726Z] Total : 15483.67 60.48 0.00 0.00 0.00 0.00 0.00 00:27:51.299 00:27:52.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:52.235 Nvme0n1 : 10.00 15497.40 60.54 0.00 0.00 0.00 0.00 0.00 00:27:52.235 [2024-11-15T10:46:32.662Z] =================================================================================================================== 00:27:52.235 [2024-11-15T10:46:32.662Z] Total : 15497.40 60.54 0.00 0.00 0.00 0.00 0.00 00:27:52.235 00:27:52.235 00:27:52.235 Latency(us) 00:27:52.235 [2024-11-15T10:46:32.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:52.235 Nvme0n1 : 10.01 15500.94 60.55 0.00 0.00 8253.08 4271.98 18058.81 00:27:52.235 [2024-11-15T10:46:32.662Z] =================================================================================================================== 00:27:52.235 [2024-11-15T10:46:32.662Z] Total : 15500.94 60.55 0.00 0.00 8253.08 4271.98 18058.81 00:27:52.235 { 00:27:52.235 "results": [ 00:27:52.235 { 00:27:52.235 "job": "Nvme0n1", 00:27:52.235 "core_mask": "0x2", 00:27:52.235 "workload": "randwrite", 00:27:52.235 "status": "finished", 00:27:52.235 "queue_depth": 128, 00:27:52.235 "io_size": 4096, 00:27:52.235 "runtime": 10.005972, 00:27:52.235 "iops": 15500.94283693778, 00:27:52.235 "mibps": 60.550557956788204, 00:27:52.236 "io_failed": 0, 00:27:52.236 "io_timeout": 0, 00:27:52.236 "avg_latency_us": 8253.080933407264, 00:27:52.236 "min_latency_us": 4271.976296296296, 00:27:52.236 "max_latency_us": 18058.80888888889 00:27:52.236 } 00:27:52.236 ], 00:27:52.236 "core_count": 1 00:27:52.236 } 00:27:52.236 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3062998 00:27:52.236 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3062998 ']' 00:27:52.236 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3062998 
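The grow step interleaved with the run above (nvmf_lvs_grow.sh lines 60-62 in the xtrace) amounts to one RPC plus a jq check. A minimal sketch, assuming the lvstore UUID from this run and the target application's default RPC socket:

# Grow the lvstore onto the enlarged backing file, then verify the cluster count.
LVS=7ee36791-bccf-4275-a46d-2ac76d2dd2d9
scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS"
data_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" \
                | jq -r '.[0].total_data_clusters')
(( data_clusters == 99 ))   # 61 free + 38 allocated clusters after the grow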
00:27:52.236 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:27:52.236 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.236 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3062998 00:27:52.496 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:52.496 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:52.496 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3062998' 00:27:52.496 killing process with pid 3062998 00:27:52.496 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3062998 00:27:52.496 Received shutdown signal, test time was about 10.000000 seconds 00:27:52.496 00:27:52.496 Latency(us) 00:27:52.496 [2024-11-15T10:46:32.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.496 [2024-11-15T10:46:32.923Z] =================================================================================================================== 00:27:52.496 [2024-11-15T10:46:32.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:52.496 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3062998 00:27:52.496 11:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:53.063 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:53.063 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:53.063 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:53.630 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:53.630 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:53.630 11:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:53.630 [2024-11-15 11:46:34.002717] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 
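The NOT invocation above is the suite's negative-assertion helper from autotest_common.sh: once bdev_aio_delete has hot-removed the base bdev and closed the lvstore, bdev_lvol_get_lvstores is expected to fail, and a success would fail the test. A hand-rolled equivalent of that check, for illustration only:

# Expect the lvstore lookup to fail after the base aio_bdev is removed.
if scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9; then
    echo "lvstore still visible after aio_bdev removal" >&2
    exit 1
fi
# The expected failure is the JSON-RPC error -19 ("No such device") shown below.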
00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:53.630 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:53.888 request: 00:27:53.889 { 00:27:53.889 "uuid": "7ee36791-bccf-4275-a46d-2ac76d2dd2d9", 00:27:53.889 "method": "bdev_lvol_get_lvstores", 00:27:53.889 "req_id": 1 00:27:53.889 } 00:27:53.889 Got JSON-RPC error response 00:27:53.889 response: 00:27:53.889 { 00:27:53.889 "code": -19, 00:27:53.889 "message": "No such device" 00:27:53.889 } 00:27:53.889 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:27:53.889 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.889 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.889 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.889 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:54.147 aio_bdev 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
012ad1ec-7617-45f6-bd9f-5c4609367675 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=012ad1ec-7617-45f6-bd9f-5c4609367675 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:54.405 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:54.663 11:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 012ad1ec-7617-45f6-bd9f-5c4609367675 -t 2000 00:27:54.922 [ 00:27:54.922 { 00:27:54.922 "name": "012ad1ec-7617-45f6-bd9f-5c4609367675", 00:27:54.922 "aliases": [ 00:27:54.922 "lvs/lvol" 00:27:54.922 ], 00:27:54.922 "product_name": "Logical Volume", 00:27:54.922 "block_size": 4096, 00:27:54.922 "num_blocks": 38912, 00:27:54.922 "uuid": "012ad1ec-7617-45f6-bd9f-5c4609367675", 00:27:54.922 "assigned_rate_limits": { 00:27:54.922 "rw_ios_per_sec": 0, 00:27:54.922 "rw_mbytes_per_sec": 0, 00:27:54.922 "r_mbytes_per_sec": 0, 00:27:54.922 "w_mbytes_per_sec": 0 00:27:54.922 }, 00:27:54.922 "claimed": false, 00:27:54.922 "zoned": false, 00:27:54.922 "supported_io_types": { 00:27:54.922 "read": true, 00:27:54.922 "write": true, 00:27:54.922 "unmap": true, 00:27:54.922 "flush": false, 00:27:54.922 "reset": true, 00:27:54.922 "nvme_admin": false, 00:27:54.922 "nvme_io": false, 00:27:54.922 "nvme_io_md": false, 00:27:54.922 "write_zeroes": true, 00:27:54.922 "zcopy": false, 00:27:54.922 "get_zone_info": false, 00:27:54.922 "zone_management": false, 00:27:54.922 "zone_append": false, 00:27:54.922 "compare": false, 00:27:54.922 "compare_and_write": false, 00:27:54.922 "abort": false, 00:27:54.922 "seek_hole": true, 00:27:54.922 "seek_data": true, 00:27:54.922 "copy": false, 00:27:54.922 "nvme_iov_md": false 00:27:54.922 }, 00:27:54.922 "driver_specific": { 00:27:54.922 "lvol": { 00:27:54.922 "lvol_store_uuid": "7ee36791-bccf-4275-a46d-2ac76d2dd2d9", 00:27:54.922 "base_bdev": "aio_bdev", 00:27:54.922 "thin_provision": false, 00:27:54.922 "num_allocated_clusters": 38, 00:27:54.922 "snapshot": false, 00:27:54.922 "clone": false, 00:27:54.922 "esnap_clone": false 00:27:54.922 } 00:27:54.922 } 00:27:54.922 } 00:27:54.922 ] 00:27:54.922 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:27:54.922 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:54.922 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:55.181 11:46:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:55.181 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:55.181 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:55.439 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:55.439 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 012ad1ec-7617-45f6-bd9f-5c4609367675 00:27:55.697 11:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ee36791-bccf-4275-a46d-2ac76d2dd2d9 00:27:55.955 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:56.213 00:27:56.213 real 0m17.934s 00:27:56.213 user 0m17.478s 00:27:56.213 sys 0m1.837s 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.213 ************************************ 00:27:56.213 END TEST lvs_grow_clean 00:27:56.213 ************************************ 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:56.213 ************************************ 00:27:56.213 START TEST lvs_grow_dirty 00:27:56.213 ************************************ 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:56.213 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:56.780 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:56.780 11:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:56.780 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=46a705a2-596c-4931-b2e1-56772a45dd90 00:27:56.780 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:27:56.780 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:57.038 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:57.038 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:57.038 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 46a705a2-596c-4931-b2e1-56772a45dd90 lvol 150 00:27:57.604 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=44bf0476-4847-433c-bb16-728480d3d536 00:27:57.604 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:57.604 11:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:57.604 [2024-11-15 11:46:37.994667] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:57.604 [2024-11-15 11:46:37.994782] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:57.604 true 00:27:57.604 11:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:27:57.604 11:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:57.862 11:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:57.862 11:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:58.428 11:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44bf0476-4847-433c-bb16-728480d3d536 00:27:58.428 11:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.685 [2024-11-15 11:46:39.066916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.686 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3065146 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3065146 /var/tmp/bdevperf.sock 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3065146 ']' 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:58.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
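The dirty-path setup traced above mirrors the clean case: build a 200 MiB AIO-backed lvstore, carve a 150 MiB lvol out of it, grow the backing file to 400 MiB, rescan, and export the lvol over NVMe/TCP. Condensed into an illustrative sketch, with AIO_FILE standing in for the full .../spdk/test/nvmf/target/aio_bdev path used in the log:

# Condensed sketch of the lvs_grow_dirty setup above.
AIO_FILE=test/nvmf/target/aio_bdev
RPC=scripts/rpc.py

rm -f "$AIO_FILE"
truncate -s 200M "$AIO_FILE"                      # 200 MiB backing file
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096    # 4 KiB logical blocks
LVS=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
LVOL=$($RPC bdev_lvol_create -u "$LVS" lvol 150)  # 150 MiB logical volume

truncate -s 400M "$AIO_FILE"                      # grow the file under the bdev
$RPC bdev_aio_rescan aio_bdev                     # new size: 102400 blocks

# Export the lvol over NVMe/TCP so bdevperf can attach to it
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420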
00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:58.944 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:59.202 [2024-11-15 11:46:39.397377] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:27:59.202 [2024-11-15 11:46:39.397451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065146 ] 00:27:59.202 [2024-11-15 11:46:39.462222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.202 [2024-11-15 11:46:39.519219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.202 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.202 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:27:59.202 11:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:59.768 Nvme0n1 00:27:59.768 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:00.026 [ 00:28:00.026 { 00:28:00.026 "name": "Nvme0n1", 00:28:00.026 "aliases": [ 00:28:00.026 "44bf0476-4847-433c-bb16-728480d3d536" 00:28:00.026 ], 00:28:00.026 "product_name": "NVMe disk", 00:28:00.026 "block_size": 4096, 00:28:00.026 "num_blocks": 38912, 00:28:00.026 "uuid": "44bf0476-4847-433c-bb16-728480d3d536", 00:28:00.026 "numa_id": 0, 00:28:00.026 "assigned_rate_limits": { 00:28:00.026 "rw_ios_per_sec": 0, 00:28:00.026 "rw_mbytes_per_sec": 0, 00:28:00.026 "r_mbytes_per_sec": 0, 00:28:00.026 "w_mbytes_per_sec": 0 00:28:00.026 }, 00:28:00.026 "claimed": false, 00:28:00.026 "zoned": false, 00:28:00.026 "supported_io_types": { 00:28:00.026 "read": true, 00:28:00.026 "write": true, 00:28:00.026 "unmap": true, 00:28:00.026 "flush": true, 00:28:00.026 "reset": true, 00:28:00.026 "nvme_admin": true, 00:28:00.026 "nvme_io": true, 00:28:00.026 "nvme_io_md": false, 00:28:00.026 "write_zeroes": true, 00:28:00.026 "zcopy": false, 00:28:00.026 "get_zone_info": false, 00:28:00.026 "zone_management": false, 00:28:00.026 "zone_append": false, 00:28:00.026 "compare": true, 00:28:00.026 "compare_and_write": true, 00:28:00.026 "abort": true, 00:28:00.026 "seek_hole": false, 00:28:00.026 "seek_data": false, 00:28:00.026 "copy": true, 00:28:00.026 "nvme_iov_md": false 00:28:00.026 }, 00:28:00.026 "memory_domains": [ 00:28:00.026 { 00:28:00.026 "dma_device_id": "system", 00:28:00.026 "dma_device_type": 1 00:28:00.026 } 00:28:00.026 ], 00:28:00.026 "driver_specific": { 00:28:00.026 "nvme": [ 00:28:00.026 { 00:28:00.026 "trid": { 00:28:00.026 "trtype": "TCP", 00:28:00.026 "adrfam": "IPv4", 00:28:00.026 "traddr": "10.0.0.2", 00:28:00.026 "trsvcid": "4420", 00:28:00.026 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:00.026 }, 00:28:00.026 "ctrlr_data": 
{ 00:28:00.026 "cntlid": 1, 00:28:00.026 "vendor_id": "0x8086", 00:28:00.026 "model_number": "SPDK bdev Controller", 00:28:00.026 "serial_number": "SPDK0", 00:28:00.026 "firmware_revision": "25.01", 00:28:00.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.026 "oacs": { 00:28:00.026 "security": 0, 00:28:00.026 "format": 0, 00:28:00.026 "firmware": 0, 00:28:00.026 "ns_manage": 0 00:28:00.026 }, 00:28:00.026 "multi_ctrlr": true, 00:28:00.026 "ana_reporting": false 00:28:00.026 }, 00:28:00.026 "vs": { 00:28:00.026 "nvme_version": "1.3" 00:28:00.026 }, 00:28:00.026 "ns_data": { 00:28:00.026 "id": 1, 00:28:00.026 "can_share": true 00:28:00.026 } 00:28:00.026 } 00:28:00.026 ], 00:28:00.026 "mp_policy": "active_passive" 00:28:00.026 } 00:28:00.026 } 00:28:00.026 ] 00:28:00.026 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3065190 00:28:00.026 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:00.026 11:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:00.284 Running I/O for 10 seconds... 00:28:01.218 Latency(us) 00:28:01.218 [2024-11-15T10:46:41.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:01.218 Nvme0n1 : 1.00 14639.00 57.18 0.00 0.00 0.00 0.00 0.00 00:28:01.218 [2024-11-15T10:46:41.645Z] =================================================================================================================== 00:28:01.218 [2024-11-15T10:46:41.645Z] Total : 14639.00 57.18 0.00 0.00 0.00 0.00 0.00 00:28:01.218 00:28:02.152 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:02.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:02.152 Nvme0n1 : 2.00 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:28:02.152 [2024-11-15T10:46:42.579Z] =================================================================================================================== 00:28:02.152 [2024-11-15T10:46:42.579Z] Total : 14939.50 58.36 0.00 0.00 0.00 0.00 0.00 00:28:02.152 00:28:02.410 true 00:28:02.410 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:02.410 11:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:02.669 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:02.669 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:02.669 11:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3065190 00:28:03.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:03.235 Nvme0n1 : 
3.00 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:28:03.235 [2024-11-15T10:46:43.662Z] =================================================================================================================== 00:28:03.235 [2024-11-15T10:46:43.662Z] Total : 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:28:03.235 00:28:04.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:04.169 Nvme0n1 : 4.00 15153.25 59.19 0.00 0.00 0.00 0.00 0.00 00:28:04.169 [2024-11-15T10:46:44.596Z] =================================================================================================================== 00:28:04.169 [2024-11-15T10:46:44.596Z] Total : 15153.25 59.19 0.00 0.00 0.00 0.00 0.00 00:28:04.169 00:28:05.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.107 Nvme0n1 : 5.00 15221.40 59.46 0.00 0.00 0.00 0.00 0.00 00:28:05.107 [2024-11-15T10:46:45.534Z] =================================================================================================================== 00:28:05.107 [2024-11-15T10:46:45.534Z] Total : 15221.40 59.46 0.00 0.00 0.00 0.00 0.00 00:28:05.107 00:28:06.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:06.481 Nvme0n1 : 6.00 15288.00 59.72 0.00 0.00 0.00 0.00 0.00 00:28:06.481 [2024-11-15T10:46:46.908Z] =================================================================================================================== 00:28:06.481 [2024-11-15T10:46:46.908Z] Total : 15288.00 59.72 0.00 0.00 0.00 0.00 0.00 00:28:06.481 00:28:07.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.416 Nvme0n1 : 7.00 15335.57 59.90 0.00 0.00 0.00 0.00 0.00 00:28:07.416 [2024-11-15T10:46:47.843Z] =================================================================================================================== 00:28:07.416 [2024-11-15T10:46:47.843Z] Total : 15335.57 59.90 0.00 0.00 0.00 0.00 0.00 00:28:07.416 00:28:08.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.349 Nvme0n1 : 8.00 15387.12 60.11 0.00 0.00 0.00 0.00 0.00 00:28:08.349 [2024-11-15T10:46:48.776Z] =================================================================================================================== 00:28:08.349 [2024-11-15T10:46:48.776Z] Total : 15387.12 60.11 0.00 0.00 0.00 0.00 0.00 00:28:08.349 00:28:09.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:09.283 Nvme0n1 : 9.00 15427.22 60.26 0.00 0.00 0.00 0.00 0.00 00:28:09.283 [2024-11-15T10:46:49.710Z] =================================================================================================================== 00:28:09.283 [2024-11-15T10:46:49.710Z] Total : 15427.22 60.26 0.00 0.00 0.00 0.00 0.00 00:28:09.283 00:28:10.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.218 Nvme0n1 : 10.00 15433.90 60.29 0.00 0.00 0.00 0.00 0.00 00:28:10.218 [2024-11-15T10:46:50.645Z] =================================================================================================================== 00:28:10.218 [2024-11-15T10:46:50.645Z] Total : 15433.90 60.29 0.00 0.00 0.00 0.00 0.00 00:28:10.218 00:28:10.218 00:28:10.218 Latency(us) 00:28:10.218 [2024-11-15T10:46:50.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.218 Nvme0n1 : 10.01 15438.01 60.30 0.00 0.00 8286.57 4466.16 18835.53 00:28:10.218 
[2024-11-15T10:46:50.645Z] =================================================================================================================== 00:28:10.218 [2024-11-15T10:46:50.645Z] Total : 15438.01 60.30 0.00 0.00 8286.57 4466.16 18835.53 00:28:10.218 { 00:28:10.218 "results": [ 00:28:10.218 { 00:28:10.218 "job": "Nvme0n1", 00:28:10.218 "core_mask": "0x2", 00:28:10.218 "workload": "randwrite", 00:28:10.218 "status": "finished", 00:28:10.218 "queue_depth": 128, 00:28:10.218 "io_size": 4096, 00:28:10.218 "runtime": 10.005631, 00:28:10.218 "iops": 15438.00685833807, 00:28:10.218 "mibps": 60.30471429038309, 00:28:10.218 "io_failed": 0, 00:28:10.218 "io_timeout": 0, 00:28:10.218 "avg_latency_us": 8286.56988130031, 00:28:10.218 "min_latency_us": 4466.157037037037, 00:28:10.218 "max_latency_us": 18835.53185185185 00:28:10.218 } 00:28:10.218 ], 00:28:10.218 "core_count": 1 00:28:10.218 } 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3065146 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3065146 ']' 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3065146 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065146 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065146' 00:28:10.218 killing process with pid 3065146 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3065146 00:28:10.218 Received shutdown signal, test time was about 10.000000 seconds 00:28:10.218 00:28:10.218 Latency(us) 00:28:10.218 [2024-11-15T10:46:50.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.218 [2024-11-15T10:46:50.645Z] =================================================================================================================== 00:28:10.218 [2024-11-15T10:46:50.645Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.218 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3065146 00:28:10.476 11:46:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.735 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:10.994 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:10.994 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:11.252 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:11.252 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:11.252 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3062557 00:28:11.252 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3062557 00:28:11.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3062557 Killed "${NVMF_APP[@]}" "$@" 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3066496 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3066496 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3066496 ']' 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.511 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:11.511 [2024-11-15 11:46:51.731432] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:11.511 [2024-11-15 11:46:51.732536] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:11.511 [2024-11-15 11:46:51.732604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.511 [2024-11-15 11:46:51.808158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.511 [2024-11-15 11:46:51.866732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.511 [2024-11-15 11:46:51.866784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.511 [2024-11-15 11:46:51.866813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.511 [2024-11-15 11:46:51.866825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.511 [2024-11-15 11:46:51.866835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.511 [2024-11-15 11:46:51.867434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.770 [2024-11-15 11:46:51.961792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:11.770 [2024-11-15 11:46:51.962091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
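Because the previous target was killed with SIGKILL, the lvstore on the AIO file was never cleanly unloaded; the restart above brings up nvmf_tgt in interrupt mode, and when the AIO bdev is re-created a few lines below, the blobstore recovery path replays the dirty metadata ("Performing recovery on blobstore", "Recover: blob 0x0/0x1"). A stripped-down sketch of that restart; the real run additionally wraps nvmf_tgt in ip netns exec for the cvl_0_0_ns_spdk namespace:

# Minimal sketch of the dirty restart above.
build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
NVMF_PID=$!
# ... wait until /var/tmp/spdk.sock accepts RPCs ...

# Re-creating the AIO bdev over the dirty file triggers blobstore recovery
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096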
00:28:11.770 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.770 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:11.770 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.770 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.770 11:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:11.770 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.770 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:12.028 [2024-11-15 11:46:52.262339] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:12.028 [2024-11-15 11:46:52.262504] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:12.028 [2024-11-15 11:46:52.262553] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 44bf0476-4847-433c-bb16-728480d3d536 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=44bf0476-4847-433c-bb16-728480d3d536 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:12.028 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:12.286 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 44bf0476-4847-433c-bb16-728480d3d536 -t 2000 00:28:12.544 [ 00:28:12.544 { 00:28:12.544 "name": "44bf0476-4847-433c-bb16-728480d3d536", 00:28:12.544 "aliases": [ 00:28:12.544 "lvs/lvol" 00:28:12.544 ], 00:28:12.544 "product_name": "Logical Volume", 00:28:12.544 "block_size": 4096, 00:28:12.544 "num_blocks": 38912, 00:28:12.544 "uuid": "44bf0476-4847-433c-bb16-728480d3d536", 00:28:12.544 "assigned_rate_limits": { 00:28:12.544 "rw_ios_per_sec": 0, 00:28:12.544 "rw_mbytes_per_sec": 0, 00:28:12.544 
"r_mbytes_per_sec": 0, 00:28:12.544 "w_mbytes_per_sec": 0 00:28:12.544 }, 00:28:12.544 "claimed": false, 00:28:12.544 "zoned": false, 00:28:12.544 "supported_io_types": { 00:28:12.544 "read": true, 00:28:12.544 "write": true, 00:28:12.545 "unmap": true, 00:28:12.545 "flush": false, 00:28:12.545 "reset": true, 00:28:12.545 "nvme_admin": false, 00:28:12.545 "nvme_io": false, 00:28:12.545 "nvme_io_md": false, 00:28:12.545 "write_zeroes": true, 00:28:12.545 "zcopy": false, 00:28:12.545 "get_zone_info": false, 00:28:12.545 "zone_management": false, 00:28:12.545 "zone_append": false, 00:28:12.545 "compare": false, 00:28:12.545 "compare_and_write": false, 00:28:12.545 "abort": false, 00:28:12.545 "seek_hole": true, 00:28:12.545 "seek_data": true, 00:28:12.545 "copy": false, 00:28:12.545 "nvme_iov_md": false 00:28:12.545 }, 00:28:12.545 "driver_specific": { 00:28:12.545 "lvol": { 00:28:12.545 "lvol_store_uuid": "46a705a2-596c-4931-b2e1-56772a45dd90", 00:28:12.545 "base_bdev": "aio_bdev", 00:28:12.545 "thin_provision": false, 00:28:12.545 "num_allocated_clusters": 38, 00:28:12.545 "snapshot": false, 00:28:12.545 "clone": false, 00:28:12.545 "esnap_clone": false 00:28:12.545 } 00:28:12.545 } 00:28:12.545 } 00:28:12.545 ] 00:28:12.545 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:12.545 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:12.545 11:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:12.843 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:12.843 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:12.843 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:13.136 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:13.136 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:13.419 [2024-11-15 11:46:53.656132] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:13.419 11:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:13.419 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:13.677 request: 00:28:13.677 { 00:28:13.677 "uuid": "46a705a2-596c-4931-b2e1-56772a45dd90", 00:28:13.677 "method": "bdev_lvol_get_lvstores", 00:28:13.677 "req_id": 1 00:28:13.677 } 00:28:13.677 Got JSON-RPC error response 00:28:13.677 response: 00:28:13.677 { 00:28:13.677 "code": -19, 00:28:13.677 "message": "No such device" 00:28:13.677 } 00:28:13.677 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:13.677 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:13.677 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:13.677 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:13.677 11:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:13.935 aio_bdev 00:28:13.935 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 44bf0476-4847-433c-bb16-728480d3d536 00:28:13.935 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=44bf0476-4847-433c-bb16-728480d3d536 00:28:13.935 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:13.935 11:46:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:13.935 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:13.935 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:13.935 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:14.194 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 44bf0476-4847-433c-bb16-728480d3d536 -t 2000 00:28:14.452 [ 00:28:14.452 { 00:28:14.452 "name": "44bf0476-4847-433c-bb16-728480d3d536", 00:28:14.452 "aliases": [ 00:28:14.452 "lvs/lvol" 00:28:14.452 ], 00:28:14.452 "product_name": "Logical Volume", 00:28:14.452 "block_size": 4096, 00:28:14.452 "num_blocks": 38912, 00:28:14.452 "uuid": "44bf0476-4847-433c-bb16-728480d3d536", 00:28:14.452 "assigned_rate_limits": { 00:28:14.452 "rw_ios_per_sec": 0, 00:28:14.452 "rw_mbytes_per_sec": 0, 00:28:14.452 "r_mbytes_per_sec": 0, 00:28:14.452 "w_mbytes_per_sec": 0 00:28:14.452 }, 00:28:14.452 "claimed": false, 00:28:14.452 "zoned": false, 00:28:14.452 "supported_io_types": { 00:28:14.452 "read": true, 00:28:14.452 "write": true, 00:28:14.452 "unmap": true, 00:28:14.452 "flush": false, 00:28:14.452 "reset": true, 00:28:14.452 "nvme_admin": false, 00:28:14.452 "nvme_io": false, 00:28:14.452 "nvme_io_md": false, 00:28:14.452 "write_zeroes": true, 00:28:14.452 "zcopy": false, 00:28:14.452 "get_zone_info": false, 00:28:14.452 "zone_management": false, 00:28:14.452 "zone_append": false, 00:28:14.452 "compare": false, 00:28:14.452 "compare_and_write": false, 00:28:14.452 "abort": false, 00:28:14.452 "seek_hole": true, 00:28:14.452 "seek_data": true, 00:28:14.452 "copy": false, 00:28:14.452 "nvme_iov_md": false 00:28:14.452 }, 00:28:14.452 "driver_specific": { 00:28:14.452 "lvol": { 00:28:14.452 "lvol_store_uuid": "46a705a2-596c-4931-b2e1-56772a45dd90", 00:28:14.452 "base_bdev": "aio_bdev", 00:28:14.452 "thin_provision": false, 00:28:14.452 "num_allocated_clusters": 38, 00:28:14.452 "snapshot": false, 00:28:14.452 "clone": false, 00:28:14.452 "esnap_clone": false 00:28:14.452 } 00:28:14.452 } 00:28:14.452 } 00:28:14.452 ] 00:28:14.452 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:14.452 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:14.452 11:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:14.711 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:14.711 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:14.711 11:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:14.969 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:14.969 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 44bf0476-4847-433c-bb16-728480d3d536 00:28:15.227 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 46a705a2-596c-4931-b2e1-56772a45dd90 00:28:15.485 11:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:16.051 00:28:16.051 real 0m19.584s 00:28:16.051 user 0m36.739s 00:28:16.051 sys 0m4.569s 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:16.051 ************************************ 00:28:16.051 END TEST lvs_grow_dirty 00:28:16.051 ************************************ 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:16.051 nvmf_trace.0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
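Condensed for reference, the lvs_grow_dirty teardown that the trace above walks through amounts to the short sequence below. This is an illustrative sketch only: the rpc.py path, the lvol/lvstore UUIDs and the aio_bdev name are the ones already recorded in the trace, and a running SPDK target behind the default RPC socket is assumed.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # same path as used throughout the trace
    # Tear down in dependency order: logical volume first, then its lvstore, then the AIO bdev backing it.
    $RPC bdev_lvol_delete 44bf0476-4847-433c-bb16-728480d3d536
    $RPC bdev_lvol_delete_lvstore -u 46a705a2-596c-4931-b2e1-56772a45dd90
    $RPC bdev_aio_delete aio_bdev
    # Finally remove the backing file the test created for the AIO bdev.
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev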
00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.051 rmmod nvme_tcp 00:28:16.051 rmmod nvme_fabrics 00:28:16.051 rmmod nvme_keyring 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3066496 ']' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3066496 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3066496 ']' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3066496 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3066496 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3066496' 00:28:16.051 killing process with pid 3066496 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3066496 00:28:16.051 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3066496 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.312 11:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.219 00:28:18.219 real 0m42.986s 00:28:18.219 user 0m55.956s 00:28:18.219 sys 0m8.429s 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:18.219 ************************************ 00:28:18.219 END TEST nvmf_lvs_grow 00:28:18.219 ************************************ 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.219 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:18.478 ************************************ 00:28:18.478 START TEST nvmf_bdev_io_wait 00:28:18.478 ************************************ 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:18.478 * Looking for test storage... 
00:28:18.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:18.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.478 --rc genhtml_branch_coverage=1 00:28:18.478 --rc genhtml_function_coverage=1 00:28:18.478 --rc genhtml_legend=1 00:28:18.478 --rc geninfo_all_blocks=1 00:28:18.478 --rc geninfo_unexecuted_blocks=1 00:28:18.478 00:28:18.478 ' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:18.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.478 --rc genhtml_branch_coverage=1 00:28:18.478 --rc genhtml_function_coverage=1 00:28:18.478 --rc genhtml_legend=1 00:28:18.478 --rc geninfo_all_blocks=1 00:28:18.478 --rc geninfo_unexecuted_blocks=1 00:28:18.478 00:28:18.478 ' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:18.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.478 --rc genhtml_branch_coverage=1 00:28:18.478 --rc genhtml_function_coverage=1 00:28:18.478 --rc genhtml_legend=1 00:28:18.478 --rc geninfo_all_blocks=1 00:28:18.478 --rc geninfo_unexecuted_blocks=1 00:28:18.478 00:28:18.478 ' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:18.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.478 --rc genhtml_branch_coverage=1 00:28:18.478 --rc genhtml_function_coverage=1 00:28:18.478 --rc genhtml_legend=1 00:28:18.478 --rc geninfo_all_blocks=1 00:28:18.478 --rc 
geninfo_unexecuted_blocks=1 00:28:18.478 00:28:18.478 ' 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.478 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.479 11:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:21.013 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:21.013 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:21.013 Found net devices under 0000:09:00.0: cvl_0_0 00:28:21.013 
11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.013 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:21.014 Found net devices under 0000:09:00.1: cvl_0_1 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:28:21.014 00:28:21.014 --- 10.0.0.2 ping statistics --- 00:28:21.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.014 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:28:21.014 00:28:21.014 --- 10.0.0.1 ping statistics --- 00:28:21.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.014 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.014 11:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3069095 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3069095 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3069095 ']' 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
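For readability, the TCP test topology that nvmf_tcp_init builds in the trace above reduces to the sketch below. Interface names, addresses and binary paths are copied from the trace; the sketch assumes the two e810 ports have already been exposed as cvl_0_0 and cvl_0_1, and it omits the harness's error handling.

    # Move one port into a private network namespace so target and initiator use separate stacks.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side (inside namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the harness additionally tags the rule with an SPDK_NVMF comment so it can be filtered out on restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application is then launched inside the namespace in interrupt mode, paused at --wait-for-rpc.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc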
00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.014 [2024-11-15 11:47:01.079498] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:21.014 [2024-11-15 11:47:01.080543] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:21.014 [2024-11-15 11:47:01.080603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.014 [2024-11-15 11:47:01.151710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.014 [2024-11-15 11:47:01.210397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.014 [2024-11-15 11:47:01.210446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.014 [2024-11-15 11:47:01.210474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.014 [2024-11-15 11:47:01.210487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.014 [2024-11-15 11:47:01.210496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.014 [2024-11-15 11:47:01.212149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.014 [2024-11-15 11:47:01.212217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.014 [2024-11-15 11:47:01.212267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.014 [2024-11-15 11:47:01.212270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.014 [2024-11-15 11:47:01.212769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
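The rpc_cmd calls traced below then provision the paused target. Condensed, the sequence is the following sketch; arguments are exactly those in the trace, but the calls are shown as plain rpc.py invocations rather than through the test's rpc_cmd wrapper, and a target listening on the default /var/tmp/spdk.sock is assumed.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1        # deliberately tiny bdev IO pool/cache so the IO-wait path gets exercised
    $RPC framework_start_init              # finish subsystem init (target was started with --wait-for-rpc)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420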
00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.014 [2024-11-15 11:47:01.398059] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:21.014 [2024-11-15 11:47:01.398268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:21.014 [2024-11-15 11:47:01.399189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:21.014 [2024-11-15 11:47:01.400035] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.014 [2024-11-15 11:47:01.404944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.014 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.015 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:21.015 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.015 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.273 Malloc0 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:21.273 [2024-11-15 11:47:01.461134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3069166 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3069168 00:28:21.273 11:47:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3069170 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.273 { 00:28:21.273 "params": { 00:28:21.273 "name": "Nvme$subsystem", 00:28:21.273 "trtype": "$TEST_TRANSPORT", 00:28:21.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.273 "adrfam": "ipv4", 00:28:21.273 "trsvcid": "$NVMF_PORT", 00:28:21.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.273 "hdgst": ${hdgst:-false}, 00:28:21.273 "ddgst": ${ddgst:-false} 00:28:21.273 }, 00:28:21.273 "method": "bdev_nvme_attach_controller" 00:28:21.273 } 00:28:21.273 EOF 00:28:21.273 )") 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3069172 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.273 { 00:28:21.273 "params": { 00:28:21.273 "name": "Nvme$subsystem", 00:28:21.273 "trtype": "$TEST_TRANSPORT", 00:28:21.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.273 "adrfam": "ipv4", 00:28:21.273 "trsvcid": "$NVMF_PORT", 00:28:21.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.273 "hdgst": ${hdgst:-false}, 00:28:21.273 "ddgst": ${ddgst:-false} 00:28:21.273 }, 00:28:21.273 "method": "bdev_nvme_attach_controller" 00:28:21.273 } 00:28:21.273 EOF 00:28:21.273 )") 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 
1 -s 256 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:21.273 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.274 { 00:28:21.274 "params": { 00:28:21.274 "name": "Nvme$subsystem", 00:28:21.274 "trtype": "$TEST_TRANSPORT", 00:28:21.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.274 "adrfam": "ipv4", 00:28:21.274 "trsvcid": "$NVMF_PORT", 00:28:21.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.274 "hdgst": ${hdgst:-false}, 00:28:21.274 "ddgst": ${ddgst:-false} 00:28:21.274 }, 00:28:21.274 "method": "bdev_nvme_attach_controller" 00:28:21.274 } 00:28:21.274 EOF 00:28:21.274 )") 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:21.274 { 00:28:21.274 "params": { 00:28:21.274 "name": "Nvme$subsystem", 00:28:21.274 "trtype": "$TEST_TRANSPORT", 00:28:21.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.274 "adrfam": "ipv4", 00:28:21.274 "trsvcid": "$NVMF_PORT", 00:28:21.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.274 "hdgst": ${hdgst:-false}, 00:28:21.274 "ddgst": ${ddgst:-false} 00:28:21.274 }, 00:28:21.274 "method": "bdev_nvme_attach_controller" 00:28:21.274 } 00:28:21.274 EOF 00:28:21.274 )") 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3069166 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:21.274 "params": { 00:28:21.274 "name": "Nvme1", 00:28:21.274 "trtype": "tcp", 00:28:21.274 "traddr": "10.0.0.2", 00:28:21.274 "adrfam": "ipv4", 00:28:21.274 "trsvcid": "4420", 00:28:21.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.274 "hdgst": false, 00:28:21.274 "ddgst": false 00:28:21.274 }, 00:28:21.274 "method": "bdev_nvme_attach_controller" 00:28:21.274 }' 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:21.274 "params": { 00:28:21.274 "name": "Nvme1", 00:28:21.274 "trtype": "tcp", 00:28:21.274 "traddr": "10.0.0.2", 00:28:21.274 "adrfam": "ipv4", 00:28:21.274 "trsvcid": "4420", 00:28:21.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.274 "hdgst": false, 00:28:21.274 "ddgst": false 00:28:21.274 }, 00:28:21.274 "method": "bdev_nvme_attach_controller" 00:28:21.274 }' 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:21.274 "params": { 00:28:21.274 "name": "Nvme1", 00:28:21.274 "trtype": "tcp", 00:28:21.274 "traddr": "10.0.0.2", 00:28:21.274 "adrfam": "ipv4", 00:28:21.274 "trsvcid": "4420", 00:28:21.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.274 "hdgst": false, 00:28:21.274 "ddgst": false 00:28:21.274 }, 00:28:21.274 "method": "bdev_nvme_attach_controller" 00:28:21.274 }' 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:21.274 11:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:21.274 "params": { 00:28:21.274 "name": "Nvme1", 00:28:21.274 "trtype": "tcp", 00:28:21.274 "traddr": "10.0.0.2", 00:28:21.274 "adrfam": "ipv4", 00:28:21.274 "trsvcid": "4420", 00:28:21.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.274 "hdgst": false, 00:28:21.274 "ddgst": false 00:28:21.274 }, 00:28:21.274 "method": "bdev_nvme_attach_controller" 00:28:21.274 }' 00:28:21.274 [2024-11-15 11:47:01.513447] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:21.274 [2024-11-15 11:47:01.513447] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:21.274 [2024-11-15 11:47:01.513447] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:21.274 [2024-11-15 11:47:01.513456] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
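The four gen_nvmf_target_json calls above each build the same one-entry bdev config (a single bdev_nvme_attach_controller against the target) and hand it to their bdevperf instance over process substitution, which is why every command line shows --json /dev/fd/63 rather than a file path. The following is a minimal sketch of that pattern: the params block is taken from the printf output above, while the surrounding "subsystems"/"config" envelope and the gen_bdevperf_json helper name are assumptions made for illustration, not code copied from nvmf/common.sh.

#!/usr/bin/env bash
# Sketch only: a simplified stand-in for gen_nvmf_target_json. The params block
# mirrors the trace above; the envelope is the usual SPDK JSON-config shape.
bdevperf=./build/examples/bdevperf

gen_bdevperf_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# The config never touches disk: <(...) hands bdevperf an already-open fd,
# which is what appears as --json /dev/fd/63 in the xtrace above.
"$bdevperf" --json <(gen_bdevperf_json) -q 128 -o 4096 -w write -t 1 -s 256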
00:28:21.274 [2024-11-15 11:47:01.513547] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:28:21.274 [2024-11-15 11:47:01.513548] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:28:21.274 [2024-11-15 11:47:01.513547] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:28:21.274 [2024-11-15 11:47:01.513550] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:28:21.532 [2024-11-15 11:47:01.700797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.532 [2024-11-15 11:47:01.756720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:21.532 [2024-11-15 11:47:01.804292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.532 [2024-11-15 11:47:01.860138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:21.532 [2024-11-15 11:47:01.907027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.790 [2024-11-15 11:47:01.964351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:21.790 [2024-11-15 11:47:01.984708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.790 [2024-11-15 11:47:02.038127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:21.790 Running I/O for 1 seconds...
00:28:21.790 Running I/O for 1 seconds...
00:28:22.049 Running I/O for 1 seconds...
00:28:22.049 Running I/O for 1 seconds...
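bdev_io_wait.sh starts four bdevperf processes at once, one per workload, each pinned to its own core (-m 0x10/0x20/0x40/0x80) and given its own shared-memory instance id (-i 1..4); the instance id is what yields the distinct --file-prefix=spdk1..spdk4 values in the EAL lines above and lets four DPDK processes coexist on one node. Below is a hedged sketch of that launch-and-wait pattern, reusing the gen_bdevperf_json helper sketched earlier (the real script tracks WRITE/READ/FLUSH/UNMAP PIDs individually and waits on each). Once the one-second runs finish, bdevperf prints one latency table per job, shown below; a quick sanity check on those tables is that the MiB/s column is simply IOPS times the 4 KiB I/O size, e.g. 169289.23 IOPS for the flush job works out to the reported 661.29 MiB/s.

# Sketch: one bdevperf per workload, in parallel, each on its own core and
# with its own DPDK shm id, then reaped with a single wait.
bdevperf=./build/examples/bdevperf
core_mask=([1]=0x10 [2]=0x20 [3]=0x40 [4]=0x80)
workload=([1]=write [2]=read [3]=flush [4]=unmap)
pids=()
for i in 1 2 3 4; do
    "$bdevperf" -m "${core_mask[i]}" -i "$i" --json <(gen_bdevperf_json) \
        -q 128 -o 4096 -w "${workload[i]}" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"

# IOPS -> MiB/s check for the flush row in the tables below:
awk 'BEGIN { printf "%.2f MiB/s\n", 169289.23 * 4096 / (1024 * 1024) }'   # 661.29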
00:28:22.983 6802.00 IOPS, 26.57 MiB/s [2024-11-15T10:47:03.410Z] 169632.00 IOPS, 662.62 MiB/s
00:28:22.983 Latency(us)
00:28:22.983 [2024-11-15T10:47:03.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.983 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:28:22.983 Nvme1n1 : 1.00 169289.23 661.29 0.00 0.00 752.01 335.27 2026.76
00:28:22.983 [2024-11-15T10:47:03.410Z] ===================================================================================================================
00:28:22.983 [2024-11-15T10:47:03.410Z] Total : 169289.23 661.29 0.00 0.00 752.01 335.27 2026.76
00:28:22.983
00:28:22.983 Latency(us)
00:28:22.983 [2024-11-15T10:47:03.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.983 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:28:22.983 Nvme1n1 : 1.03 6761.49 26.41 0.00 0.00 18680.88 4441.88 32622.36
00:28:22.983 [2024-11-15T10:47:03.410Z] ===================================================================================================================
00:28:22.983 [2024-11-15T10:47:03.410Z] Total : 6761.49 26.41 0.00 0.00 18680.88 4441.88 32622.36
00:28:22.983 6381.00 IOPS, 24.93 MiB/s
00:28:22.983 Latency(us)
00:28:22.983 [2024-11-15T10:47:03.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.983 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:28:22.983 Nvme1n1 : 1.01 6482.10 25.32 0.00 0.00 19674.84 5752.60 31068.92
00:28:22.983 [2024-11-15T10:47:03.410Z] ===================================================================================================================
00:28:22.983 [2024-11-15T10:47:03.410Z] Total : 6482.10 25.32 0.00 0.00 19674.84 5752.60 31068.92
00:28:22.983 8844.00 IOPS, 34.55 MiB/s
00:28:22.983 Latency(us)
00:28:22.983 [2024-11-15T10:47:03.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.983 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:28:22.983 Nvme1n1 : 1.01 8893.05 34.74 0.00 0.00 14323.96 4660.34 18932.62
00:28:22.983 [2024-11-15T10:47:03.410Z] ===================================================================================================================
00:28:22.983 [2024-11-15T10:47:03.410Z] Total : 8893.05 34.74 0.00 0.00 14323.96 4660.34 18932.62
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3069168
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3069170
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3069172
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:28:23.241 11:47:03
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.241 rmmod nvme_tcp 00:28:23.241 rmmod nvme_fabrics 00:28:23.241 rmmod nvme_keyring 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3069095 ']' 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3069095 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3069095 ']' 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3069095 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069095 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069095' 00:28:23.241 killing process with pid 3069095 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3069095 00:28:23.241 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3069095 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:23.500 11:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.500 11:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:26.036 00:28:26.036 real 0m7.191s 00:28:26.036 user 0m14.609s 00:28:26.036 sys 0m3.885s 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:26.036 ************************************ 00:28:26.036 END TEST nvmf_bdev_io_wait 00:28:26.036 ************************************ 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:26.036 ************************************ 00:28:26.036 START TEST nvmf_queue_depth 00:28:26.036 ************************************ 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:26.036 * Looking for test storage... 
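Before nvmf_queue_depth proceeds, note how the previous test cleaned up: the nvmftestfini sequence above flushes outstanding I/O, unloads the host-side nvme-tcp and nvme-fabrics modules, kills nvmf_tgt only after confirming the PID still belongs to an SPDK reactor, then strips the SPDK_NVMF iptables rules and removes the test namespace. Below is a condensed sketch of that teardown; the structure and the || true guards are illustrative rather than copied from autotest_common.sh.

# Condensed sketch of the nvmfcleanup/killprocess pattern traced above.
nvmfpid=3069095                 # the harness records this from $! at start-up

sync
modprobe -v -r nvme-tcp || true        # also drops nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics || true

# Kill the target only if the PID is still alive and still looks like ours,
# so a recycled PID belonging to an unrelated process is never signalled.
if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
    comm=$(ps --no-headers -o comm= "$nvmfpid")
    if [ "$comm" != "sudo" ]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid" 2>/dev/null || true
    fi
fi

# Finally, strip the SPDK_NVMF iptables rules and remove the test netns.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true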
00:28:26.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.036 11:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.036 --rc genhtml_branch_coverage=1 00:28:26.036 --rc genhtml_function_coverage=1 00:28:26.036 --rc genhtml_legend=1 00:28:26.036 --rc geninfo_all_blocks=1 00:28:26.036 --rc geninfo_unexecuted_blocks=1 00:28:26.036 00:28:26.036 ' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.036 --rc genhtml_branch_coverage=1 00:28:26.036 --rc genhtml_function_coverage=1 00:28:26.036 --rc genhtml_legend=1 00:28:26.036 --rc geninfo_all_blocks=1 00:28:26.036 --rc geninfo_unexecuted_blocks=1 00:28:26.036 00:28:26.036 ' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.036 --rc genhtml_branch_coverage=1 00:28:26.036 --rc genhtml_function_coverage=1 00:28:26.036 --rc genhtml_legend=1 00:28:26.036 --rc geninfo_all_blocks=1 00:28:26.036 --rc geninfo_unexecuted_blocks=1 00:28:26.036 00:28:26.036 ' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.036 --rc genhtml_branch_coverage=1 00:28:26.036 --rc genhtml_function_coverage=1 00:28:26.036 --rc genhtml_legend=1 00:28:26.036 --rc geninfo_all_blocks=1 00:28:26.036 --rc 
geninfo_unexecuted_blocks=1 00:28:26.036 00:28:26.036 ' 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.036 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.037 11:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.941 11:47:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:27.941 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:27.941 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:28:27.941 Found net devices under 0000:09:00.0: cvl_0_0 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:27.941 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:27.942 Found net devices under 0000:09:00.1: cvl_0_1 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.942 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.200 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.200 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.200 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.200 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:28:28.201 00:28:28.201 --- 10.0.0.2 ping statistics --- 00:28:28.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.201 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:28:28.201 00:28:28.201 --- 10.0.0.1 ping statistics --- 00:28:28.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.201 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3071399 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3071399 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3071399 ']' 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
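The queue_depth run reuses the physical-port layout discovered above: of the two e810 ports, cvl_0_0 is moved into a dedicated namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP/4420 toward the initiator side, and both directions are ping-verified before the target starts. A condensed sketch of that nvmf_tcp_init sequence, using the interface names and addresses from the trace:

# Sketch of the nvmf_tcp_init steps traced above (values from the log).
tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk

ip -4 addr flush "$tgt_if"
ip -4 addr flush "$ini_if"
ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$ini_if"         # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"

ping -c 1 10.0.0.2                            # root ns -> target port
ip netns exec "$ns" ping -c 1 10.0.0.1        # namespace -> initiator port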
00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.201 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.201 [2024-11-15 11:47:08.464936] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:28.201 [2024-11-15 11:47:08.465991] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:28.201 [2024-11-15 11:47:08.466045] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.201 [2024-11-15 11:47:08.542240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.201 [2024-11-15 11:47:08.597742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.201 [2024-11-15 11:47:08.597794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.201 [2024-11-15 11:47:08.597822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.201 [2024-11-15 11:47:08.597833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.201 [2024-11-15 11:47:08.597843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.201 [2024-11-15 11:47:08.598450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.460 [2024-11-15 11:47:08.683594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:28.460 [2024-11-15 11:47:08.683894] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
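With the namespace in place, nvmfappstart launches nvmf_tgt inside it with --interrupt-mode (which is what produces the "Set SPDK running in interrupt mode" and thread intr-mode notices above) and then blocks until the RPC socket answers. A rough sketch of that start-and-wait step, with the binary path shortened and a simple polling loop standing in for waitforlisten:

# Sketch: start the target inside the namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!

# waitforlisten equivalent: poll the RPC socket until the app answers.
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done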
00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 [2024-11-15 11:47:08.731034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 Malloc0 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 [2024-11-15 11:47:08.791156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3071419 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3071419 /var/tmp/bdevperf.sock 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3071419 ']' 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:28.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.460 11:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.460 [2024-11-15 11:47:08.837365] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
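[annotation] Condensing the xtrace above, queue_depth.sh configures the target over RPC and then launches bdevperf with a deep queue; a shortened sketch of that sequence (workspace-relative paths substituted for the full Jenkins paths in the log) is:

  # TCP transport plus a 64 MiB malloc bdev (512-byte blocks) exported as cnode1
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # drive it with bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The lines that follow then attach the exported namespace to bdevperf over its own RPC socket (bdev_nvme_attach_controller -s /var/tmp/bdevperf.sock) and start the run with bdevperf.py perform_tests, which produces the IOPS/latency summary further down.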
00:28:28.460 [2024-11-15 11:47:08.837440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071419 ] 00:28:28.719 [2024-11-15 11:47:08.902862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.719 [2024-11-15 11:47:08.961761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.719 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.719 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:28.719 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:28.719 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.719 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:28.977 NVMe0n1 00:28:28.977 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.977 11:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:28.977 Running I/O for 10 seconds... 00:28:31.288 8192.00 IOPS, 32.00 MiB/s [2024-11-15T10:47:12.650Z] 8407.00 IOPS, 32.84 MiB/s [2024-11-15T10:47:13.600Z] 8533.33 IOPS, 33.33 MiB/s [2024-11-15T10:47:14.535Z] 8453.25 IOPS, 33.02 MiB/s [2024-11-15T10:47:15.470Z] 8581.40 IOPS, 33.52 MiB/s [2024-11-15T10:47:16.404Z] 8537.17 IOPS, 33.35 MiB/s [2024-11-15T10:47:17.779Z] 8612.14 IOPS, 33.64 MiB/s [2024-11-15T10:47:18.714Z] 8601.25 IOPS, 33.60 MiB/s [2024-11-15T10:47:19.649Z] 8648.22 IOPS, 33.78 MiB/s [2024-11-15T10:47:19.649Z] 8660.80 IOPS, 33.83 MiB/s 00:28:39.222 Latency(us) 00:28:39.222 [2024-11-15T10:47:19.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.222 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:39.222 Verification LBA range: start 0x0 length 0x4000 00:28:39.222 NVMe0n1 : 10.08 8688.96 33.94 0.00 0.00 117285.14 20971.52 69516.71 00:28:39.222 [2024-11-15T10:47:19.649Z] =================================================================================================================== 00:28:39.222 [2024-11-15T10:47:19.649Z] Total : 8688.96 33.94 0.00 0.00 117285.14 20971.52 69516.71 00:28:39.222 { 00:28:39.222 "results": [ 00:28:39.222 { 00:28:39.222 "job": "NVMe0n1", 00:28:39.222 "core_mask": "0x1", 00:28:39.222 "workload": "verify", 00:28:39.222 "status": "finished", 00:28:39.222 "verify_range": { 00:28:39.222 "start": 0, 00:28:39.222 "length": 16384 00:28:39.222 }, 00:28:39.222 "queue_depth": 1024, 00:28:39.222 "io_size": 4096, 00:28:39.222 "runtime": 10.084525, 00:28:39.222 "iops": 8688.956594385952, 00:28:39.222 "mibps": 33.94123669682013, 00:28:39.222 "io_failed": 0, 00:28:39.222 "io_timeout": 0, 00:28:39.222 "avg_latency_us": 117285.14136860863, 00:28:39.222 "min_latency_us": 20971.52, 00:28:39.222 "max_latency_us": 69516.70518518519 00:28:39.222 } 00:28:39.222 ], 
00:28:39.222 "core_count": 1 00:28:39.222 } 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3071419 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3071419 ']' 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3071419 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071419 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071419' 00:28:39.222 killing process with pid 3071419 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3071419 00:28:39.222 Received shutdown signal, test time was about 10.000000 seconds 00:28:39.222 00:28:39.222 Latency(us) 00:28:39.222 [2024-11-15T10:47:19.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.222 [2024-11-15T10:47:19.649Z] =================================================================================================================== 00:28:39.222 [2024-11-15T10:47:19.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.222 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3071419 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.481 rmmod nvme_tcp 00:28:39.481 rmmod nvme_fabrics 00:28:39.481 rmmod nvme_keyring 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:39.481 11:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3071399 ']' 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3071399 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3071399 ']' 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3071399 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071399 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071399' 00:28:39.481 killing process with pid 3071399 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3071399 00:28:39.481 11:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3071399 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.741 11:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.274 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.274 00:28:42.274 real 0m16.256s 00:28:42.274 user 0m22.315s 00:28:42.274 sys 0m3.481s 00:28:42.274 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.274 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:42.274 ************************************ 00:28:42.274 END TEST nvmf_queue_depth 00:28:42.274 ************************************ 00:28:42.274 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:42.274 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:42.275 ************************************ 00:28:42.275 START TEST nvmf_target_multipath 00:28:42.275 ************************************ 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:42.275 * Looking for test storage... 00:28:42.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:42.275 11:47:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.275 --rc genhtml_branch_coverage=1 00:28:42.275 --rc genhtml_function_coverage=1 00:28:42.275 --rc genhtml_legend=1 00:28:42.275 --rc geninfo_all_blocks=1 00:28:42.275 --rc geninfo_unexecuted_blocks=1 00:28:42.275 00:28:42.275 ' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.275 --rc genhtml_branch_coverage=1 00:28:42.275 --rc genhtml_function_coverage=1 00:28:42.275 --rc genhtml_legend=1 00:28:42.275 --rc geninfo_all_blocks=1 00:28:42.275 --rc geninfo_unexecuted_blocks=1 00:28:42.275 00:28:42.275 ' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.275 --rc genhtml_branch_coverage=1 00:28:42.275 --rc genhtml_function_coverage=1 00:28:42.275 --rc genhtml_legend=1 00:28:42.275 --rc geninfo_all_blocks=1 00:28:42.275 --rc 
geninfo_unexecuted_blocks=1 00:28:42.275 00:28:42.275 ' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:42.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:42.275 --rc genhtml_branch_coverage=1 00:28:42.275 --rc genhtml_function_coverage=1 00:28:42.275 --rc genhtml_legend=1 00:28:42.275 --rc geninfo_all_blocks=1 00:28:42.275 --rc geninfo_unexecuted_blocks=1 00:28:42.275 00:28:42.275 ' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
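[annotation] The long run of scripts/common.sh xtrace above is just the lcov version probe: lt 1.15 2 splits both version strings on dots and compares them field by field, and since 1 < 2 in the first field it returns success, so the older --rc lcov_branch_coverage/--rc lcov_function_coverage spellings are exported. A simplified stand-alone illustration of that comparison logic (the real cmp_versions in scripts/common.sh handles more separators and operators) is:

  lt() {                        # "is version $1 older than version $2?"
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      ((${a[i]:-0} < ${b[i]:-0})) && return 0
      ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "use pre-2.x lcov option names"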
00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.275 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:42.276 11:47:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.276 11:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.180 11:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:44.180 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:44.180 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.180 11:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:44.180 Found net devices under 0000:09:00.0: cvl_0_0 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:44.180 Found net devices under 0000:09:00.1: cvl_0_1 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:44.180 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.181 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:44.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:28:44.440 00:28:44.440 --- 10.0.0.2 ping statistics --- 00:28:44.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.440 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:28:44.440 00:28:44.440 --- 10.0.0.1 ping statistics --- 00:28:44.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.440 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:44.440 only one NIC for nvmf test 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:44.440 rmmod nvme_tcp 00:28:44.440 rmmod nvme_fabrics 00:28:44.440 rmmod nvme_keyring 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:44.440 11:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.440 11:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:46.974 11:47:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:46.974 00:28:46.974 real 0m4.620s 00:28:46.974 user 0m0.941s 00:28:46.974 sys 0m1.696s 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:46.974 ************************************ 00:28:46.974 END TEST nvmf_target_multipath 00:28:46.974 ************************************ 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:46.974 ************************************ 00:28:46.974 START TEST nvmf_zcopy 00:28:46.974 ************************************ 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:46.974 * Looking for test storage... 
00:28:46.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:28:46.974 11:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.974 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:46.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.975 --rc genhtml_branch_coverage=1 00:28:46.975 --rc genhtml_function_coverage=1 00:28:46.975 --rc genhtml_legend=1 00:28:46.975 --rc geninfo_all_blocks=1 00:28:46.975 --rc geninfo_unexecuted_blocks=1 00:28:46.975 00:28:46.975 ' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:46.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.975 --rc genhtml_branch_coverage=1 00:28:46.975 --rc genhtml_function_coverage=1 00:28:46.975 --rc genhtml_legend=1 00:28:46.975 --rc geninfo_all_blocks=1 00:28:46.975 --rc geninfo_unexecuted_blocks=1 00:28:46.975 00:28:46.975 ' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:46.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.975 --rc genhtml_branch_coverage=1 00:28:46.975 --rc genhtml_function_coverage=1 00:28:46.975 --rc genhtml_legend=1 00:28:46.975 --rc geninfo_all_blocks=1 00:28:46.975 --rc geninfo_unexecuted_blocks=1 00:28:46.975 00:28:46.975 ' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:46.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.975 --rc genhtml_branch_coverage=1 00:28:46.975 --rc genhtml_function_coverage=1 00:28:46.975 --rc genhtml_legend=1 00:28:46.975 --rc geninfo_all_blocks=1 00:28:46.975 --rc geninfo_unexecuted_blocks=1 00:28:46.975 00:28:46.975 ' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.975 11:47:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.975 11:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.935 11:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:48.935 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:48.935 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:48.935 Found net devices under 0000:09:00.0: cvl_0_0 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:48.935 Found net devices under 0000:09:00.1: cvl_0_1 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.935 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.935 11:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:28:48.936 00:28:48.936 --- 10.0.0.2 ping statistics --- 00:28:48.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.936 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:28:48.936 00:28:48.936 --- 10.0.0.1 ping statistics --- 00:28:48.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.936 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3076600 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3076600 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3076600 ']' 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.936 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:48.936 [2024-11-15 11:47:29.260227] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:48.936 [2024-11-15 11:47:29.261315] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:28:48.936 [2024-11-15 11:47:29.261385] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.936 [2024-11-15 11:47:29.332504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.194 [2024-11-15 11:47:29.390648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.194 [2024-11-15 11:47:29.390696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.194 [2024-11-15 11:47:29.390724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.194 [2024-11-15 11:47:29.390735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.194 [2024-11-15 11:47:29.390745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.194 [2024-11-15 11:47:29.391340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.194 [2024-11-15 11:47:29.478022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:49.194 [2024-11-15 11:47:29.478343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
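Note: the nvmftestinit trace above reduces to a short command sequence. The sketch below only restates it for readability; every command is lifted from the xtrace, and the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are simply what this rig detected. The target-side port is isolated in its own network namespace and nvmf_tgt is started there in interrupt mode, while the second port stays in the root namespace as the initiator side.

# Condensed recap of the setup traced above (a sketch, not a new procedure).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open NVMe/TCP port 4420 for traffic arriving on the initiator port, then
# sanity-check connectivity in both directions, exactly as the trace does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch the target inside the namespace: shm id 0, full tracepoint mask,
# interrupt mode, core mask 0x2 (the trace shows its reactor starting on core 1).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &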
00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.194 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.195 [2024-11-15 11:47:29.527944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.195 [2024-11-15 11:47:29.544072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:49.195 11:47:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.195 malloc0 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:49.195 { 00:28:49.195 "params": { 00:28:49.195 "name": "Nvme$subsystem", 00:28:49.195 "trtype": "$TEST_TRANSPORT", 00:28:49.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:49.195 "adrfam": "ipv4", 00:28:49.195 "trsvcid": "$NVMF_PORT", 00:28:49.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:49.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:49.195 "hdgst": ${hdgst:-false}, 00:28:49.195 "ddgst": ${ddgst:-false} 00:28:49.195 }, 00:28:49.195 "method": "bdev_nvme_attach_controller" 00:28:49.195 } 00:28:49.195 EOF 00:28:49.195 )") 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:49.195 11:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:49.195 "params": { 00:28:49.195 "name": "Nvme1", 00:28:49.195 "trtype": "tcp", 00:28:49.195 "traddr": "10.0.0.2", 00:28:49.195 "adrfam": "ipv4", 00:28:49.195 "trsvcid": "4420", 00:28:49.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:49.195 "hdgst": false, 00:28:49.195 "ddgst": false 00:28:49.195 }, 00:28:49.195 "method": "bdev_nvme_attach_controller" 00:28:49.195 }' 00:28:49.453 [2024-11-15 11:47:29.631998] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
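Note: the rpc_cmd calls traced above provision the zero-copy TCP transport and a single malloc-backed namespace, and gen_nvmf_target_json prints the attach config that bdevperf consumes over /dev/fd/62. rpc_cmd is the test harness wrapper around SPDK's scripts/rpc.py, so the same steps can be replayed by hand roughly as below. This is a sketch under those assumptions: the JSON is written to a temp file instead of an fd redirection, and it is wrapped here in the standard SPDK --json config envelope, whereas the helper in the trace may add further bdev options around the same bdev_nvme_attach_controller entry.

# Rough replay of the provisioning and workload traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy          # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                  # 32 MiB RAM-backed bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Attach config for bdevperf, matching the values gen_nvmf_target_json printed above.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF

# First run in the trace: 10 s verify workload, queue depth 128, 8 KiB I/O.
"$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192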
00:28:49.453 [2024-11-15 11:47:29.632073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076626 ] 00:28:49.453 [2024-11-15 11:47:29.701435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.453 [2024-11-15 11:47:29.759797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.711 Running I/O for 10 seconds... 00:28:52.018 5616.00 IOPS, 43.88 MiB/s [2024-11-15T10:47:33.377Z] 5645.00 IOPS, 44.10 MiB/s [2024-11-15T10:47:34.311Z] 5657.33 IOPS, 44.20 MiB/s [2024-11-15T10:47:35.245Z] 5661.50 IOPS, 44.23 MiB/s [2024-11-15T10:47:36.180Z] 5665.00 IOPS, 44.26 MiB/s [2024-11-15T10:47:37.554Z] 5672.00 IOPS, 44.31 MiB/s [2024-11-15T10:47:38.487Z] 5675.86 IOPS, 44.34 MiB/s [2024-11-15T10:47:39.421Z] 5673.50 IOPS, 44.32 MiB/s [2024-11-15T10:47:40.356Z] 5673.67 IOPS, 44.33 MiB/s [2024-11-15T10:47:40.356Z] 5674.30 IOPS, 44.33 MiB/s 00:28:59.929 Latency(us) 00:28:59.929 [2024-11-15T10:47:40.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.929 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:28:59.929 Verification LBA range: start 0x0 length 0x1000 00:28:59.929 Nvme1n1 : 10.01 5675.87 44.34 0.00 0.00 22476.78 3228.25 31845.64 00:28:59.929 [2024-11-15T10:47:40.356Z] =================================================================================================================== 00:28:59.929 [2024-11-15T10:47:40.356Z] Total : 5675.87 44.34 0.00 0.00 22476.78 3228.25 31845.64 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3077920 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.187 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.187 { 00:29:00.187 "params": { 00:29:00.188 "name": "Nvme$subsystem", 00:29:00.188 "trtype": "$TEST_TRANSPORT", 00:29:00.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.188 "adrfam": "ipv4", 00:29:00.188 "trsvcid": "$NVMF_PORT", 00:29:00.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.188 "hdgst": ${hdgst:-false}, 00:29:00.188 "ddgst": ${ddgst:-false} 00:29:00.188 }, 00:29:00.188 "method": "bdev_nvme_attach_controller" 00:29:00.188 } 00:29:00.188 EOF 00:29:00.188 )") 00:29:00.188 [2024-11-15 11:47:40.359864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:29:00.188 [2024-11-15 11:47:40.359899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:29:00.188 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:29:00.188 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:29:00.188 11:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.188 "params": { 00:29:00.188 "name": "Nvme1", 00:29:00.188 "trtype": "tcp", 00:29:00.188 "traddr": "10.0.0.2", 00:29:00.188 "adrfam": "ipv4", 00:29:00.188 "trsvcid": "4420", 00:29:00.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.188 "hdgst": false, 00:29:00.188 "ddgst": false 00:29:00.188 }, 00:29:00.188 "method": "bdev_nvme_attach_controller" 00:29:00.188 }' 00:29:00.188 [2024-11-15 11:47:40.367800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.367822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.375799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.375818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.383797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.383816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.391799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.391817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.399798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.399816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.401091] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:29:00.188 [2024-11-15 11:47:40.401161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077920 ] 00:29:00.188 [2024-11-15 11:47:40.407798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.407818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.415797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.415816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.423798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.423816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.431813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.431832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.439798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.439816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.447799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.447818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.455798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.455816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.463797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.463816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.470077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.188 [2024-11-15 11:47:40.471798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.471817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.479837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.479867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.487825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.487852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.495798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.495817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.503798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.503817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:00.188 [2024-11-15 11:47:40.511797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.511815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.519799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.519818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.527798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.527817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.530281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.188 [2024-11-15 11:47:40.535797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.535816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.543805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.543825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.551829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.551859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.559826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.559853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.567830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.567861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.575833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.575866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.583833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.583866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.591834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.591867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.599801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.599820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.188 [2024-11-15 11:47:40.607883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.188 [2024-11-15 11:47:40.607921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.615905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.615939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 
11:47:40.623822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.623849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.631799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.631818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.639797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.639817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.647805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.647844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.655803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.655824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.663803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.663824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.671803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.447 [2024-11-15 11:47:40.671824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.447 [2024-11-15 11:47:40.679798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.679818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.687798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.687817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.695796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.695815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.703797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.703816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.711802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.711823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.719802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.719822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.727799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.727820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.735798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.735817] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.743797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.743816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.751797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.751815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.759796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.759815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.767802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.767823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.775798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.775817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.783797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.783816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.791797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.791816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.799797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.799816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.807801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.807821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.815799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.815819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.823798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.823817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.831798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.831816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.839798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.839817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.847812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.847830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.855798] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.855817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.863805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.863828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.448 [2024-11-15 11:47:40.871841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.448 [2024-11-15 11:47:40.871887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.706 Running I/O for 5 seconds... 00:29:00.706 [2024-11-15 11:47:40.888994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.706 [2024-11-15 11:47:40.889020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.706 [2024-11-15 11:47:40.905594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.706 [2024-11-15 11:47:40.905623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.706 [2024-11-15 11:47:40.923472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.706 [2024-11-15 11:47:40.923499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:40.933629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:40.933654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:40.945678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:40.945703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:40.960898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:40.960924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:40.970687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:40.970712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:40.985263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:40.985309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:40.994793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:40.994818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.009234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.009258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.019192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.019217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.031113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 
[2024-11-15 11:47:41.031138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.046026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.046053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.062047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.062094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.071580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.071622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.083359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.083385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.096751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.096785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.106143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.106173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.707 [2024-11-15 11:47:41.122001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.707 [2024-11-15 11:47:41.122025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.965 [2024-11-15 11:47:41.138208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.965 [2024-11-15 11:47:41.138234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.965 [2024-11-15 11:47:41.154191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.965 [2024-11-15 11:47:41.154218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.965 [2024-11-15 11:47:41.169719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.965 [2024-11-15 11:47:41.169756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.965 [2024-11-15 11:47:41.179248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.965 [2024-11-15 11:47:41.179274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.965 [2024-11-15 11:47:41.191532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.191558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.204069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.204096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.213878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.213904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.225822] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.225846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.240904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.240928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.250451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.250477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.264684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.264709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.274335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.274376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.286432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.286458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.302325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.302373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.319754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.319781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.329377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.329404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.341309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.341337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.356879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.356920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.366178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.366204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.380592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.380633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.966 [2024-11-15 11:47:41.389994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.966 [2024-11-15 11:47:41.390028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.404630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.404655] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.414268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.414319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.430406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.430433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.445455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.445482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.454749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.454773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.470568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.470593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.224 [2024-11-15 11:47:41.486023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.224 [2024-11-15 11:47:41.486062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.503367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.503391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.513006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.513031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.524570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.524609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.535528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.535553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.547991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.548027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.558010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.558035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.569927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.569952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.585613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.585640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.594890] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.594915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.610666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.610689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.626080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.626121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.225 [2024-11-15 11:47:41.643433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.225 [2024-11-15 11:47:41.643458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.483 [2024-11-15 11:47:41.653240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.483 [2024-11-15 11:47:41.653267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.483 [2024-11-15 11:47:41.669467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.483 [2024-11-15 11:47:41.669494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.483 [2024-11-15 11:47:41.679489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.679515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.691335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.691361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.702084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.702109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.717685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.717710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.736189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.736215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.746187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.746212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.762578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.762618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.777196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.777222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.786135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.786159] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.799996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.800027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.809407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.809434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.821363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.821388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.837137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.837175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.846349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.846373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.860472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.860512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.871429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.871454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 11621.00 IOPS, 90.79 MiB/s [2024-11-15T10:47:41.911Z] [2024-11-15 11:47:41.885615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.885641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.895084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.895108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.484 [2024-11-15 11:47:41.906949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.484 [2024-11-15 11:47:41.906986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:41.918035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:41.918059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:41.932235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:41.932260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:41.941435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:41.941461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:41.953437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:41.953464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 
11:47:41.970075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:41.970116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:41.986251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:41.986277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.002209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.002248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.018060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.018086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.027828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.027852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.039840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.039864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.051471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.051497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.065561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.065601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.075197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.075222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.087352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.087378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.102212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.102251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.118224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.118248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.134042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.134068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.152153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.152178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.742 [2024-11-15 11:47:42.162326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.742 [2024-11-15 11:47:42.162352] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.178708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.178734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.193085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.193111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.202676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.202701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.216854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.216894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.226488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.226514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.241911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.241936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.260457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.260485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.271390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.271416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.282071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.282102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.297665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.297690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.307360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.307394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.319553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.319592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.330699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.330723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.344897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.344937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.354248] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.354273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.365865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.365888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.380568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.380609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.389400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.389425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.401153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.401177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.411732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.411756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.001 [2024-11-15 11:47:42.422415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.001 [2024-11-15 11:47:42.422441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.437778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.437819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.447003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.447027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.463675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.463714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.474428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.474453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.488379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.488405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.497712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.497737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.509851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.509883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.525888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.525913] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.535374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.535414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.546907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.546931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.259 [2024-11-15 11:47:42.561516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.259 [2024-11-15 11:47:42.561543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.570721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.570745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.582494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.582519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.598189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.598214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.614105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.614131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.623660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.623686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.635772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.635796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.646678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.646702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.661298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.661345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.260 [2024-11-15 11:47:42.671015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.260 [2024-11-15 11:47:42.671039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.685849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.685876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.695664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.695689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.707737] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.707761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.718683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.718707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.733657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.733682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.742975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.743007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.757456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.757483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.766726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.766765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.780479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.780505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.789985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.790011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.805553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.805594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.814636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.814660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.828185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.828224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.837976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.837999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.849630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.849668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.865738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.865776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 11634.00 IOPS, 90.89 MiB/s [2024-11-15T10:47:42.945Z] [2024-11-15 11:47:42.884580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:02.518 [2024-11-15 11:47:42.884620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.894120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.894145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.909813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.909852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.928092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.928117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.518 [2024-11-15 11:47:42.938261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.518 [2024-11-15 11:47:42.938312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:42.952130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:42.952156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:42.961616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:42.961656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:42.973238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:42.973263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:42.983647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:42.983679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:42.994740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:42.994765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.005398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.005438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.021845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.021869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.039233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.039257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.048918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.048941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.061184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.061225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.076778] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.076802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.085872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.085896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.097912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.097936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.114034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.114059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.130017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.130043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.139662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.139686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.777 [2024-11-15 11:47:43.151794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.777 [2024-11-15 11:47:43.151820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.778 [2024-11-15 11:47:43.162351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.778 [2024-11-15 11:47:43.162392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.778 [2024-11-15 11:47:43.178538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.778 [2024-11-15 11:47:43.178564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:02.778 [2024-11-15 11:47:43.194173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:02.778 [2024-11-15 11:47:43.194212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.211609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.211635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.221080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.221106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.236631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.236666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.246150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.246175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.257819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.257844] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.268503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.268528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.279763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.279787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.290164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.290204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.305211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.305254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.314620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.314646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.329129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.329157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.338397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.338425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.352407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.352433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.361976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.036 [2024-11-15 11:47:43.362002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.036 [2024-11-15 11:47:43.376447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.376474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.385982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.386021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.398066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.398090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.412498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.412525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.421944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.421970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.433349] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.433389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.443207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.443230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.037 [2024-11-15 11:47:43.454878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.037 [2024-11-15 11:47:43.454902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.468753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.468780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.478427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.478454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.493332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.493358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.502690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.502713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.517933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.517958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.527247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.527272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.538981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.539005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.549501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.549527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.561080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.561104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.572079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.572103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.583172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.583197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.594308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.594346] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.610084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.610110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.619338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.619364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.631082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.631107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.642275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.642300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.657845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.657870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.675692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.675735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.686273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.686320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.699832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.699857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.295 [2024-11-15 11:47:43.708858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.295 [2024-11-15 11:47:43.708882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.720630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.720671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.731720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.731746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.742736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.742761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.757553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.757580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.766953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.766978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.782249] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.782274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.798237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.798278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.815987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.816011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.826697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.826721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.840346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.840372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.850185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.850209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.864531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.864556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.873682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.873706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 11656.33 IOPS, 91.07 MiB/s [2024-11-15T10:47:43.981Z] [2024-11-15 11:47:43.885577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.885603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.901901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.901927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.911688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.911722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.923632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.923656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.934535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.934560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.947736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.947763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.957767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:03.554 [2024-11-15 11:47:43.957791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.554 [2024-11-15 11:47:43.969760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.554 [2024-11-15 11:47:43.969785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:43.985606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:43.985632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:43.994914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:43.994939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.009600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.009641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.019684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.019709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.032010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.032048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.043114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.043139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.054113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.054152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.068062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.068087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.078125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.078149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.090324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.090363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.104493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.104519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.113837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.113861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.125575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.125615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.140340] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.140391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.149543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.149568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.161081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.161107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.171527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.171553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.185344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.185384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.194791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.194815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.209095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.209119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.218933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.218957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:03.813 [2024-11-15 11:47:44.234249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:03.813 [2024-11-15 11:47:44.234287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.249625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.249652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.259541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.259581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.271393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.271418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.285618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.285659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.294886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.294910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.310170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.310195] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.319731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.319756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.331910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.331935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.342858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.342896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.358428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.358453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.373968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.374001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.383449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.383474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.395388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.395413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.406383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.406408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.421760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.421784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.431145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.431172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.443022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.443046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.453700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.453725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.469355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.469383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.478830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.478855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.072 [2024-11-15 11:47:44.493437] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.072 [2024-11-15 11:47:44.493465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.510419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.510448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.525550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.525578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.535380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.535407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.547440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.547481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.558641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.558678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.571692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.571718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.581205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.581229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.593397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.593439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.609026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.609061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.618923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.618949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.632373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.632400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.641872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.641898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.657618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.657658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.667442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.667467] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.678908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.678932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.689575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.689616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.704682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.704706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.714490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.714515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.729538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.729563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.738581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.738619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.330 [2024-11-15 11:47:44.754374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.330 [2024-11-15 11:47:44.754405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.767053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.767081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.776764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.776788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.788814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.788839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.799377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.799403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.809714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.809737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.825137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.825160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.835011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.835035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.849945] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.849969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.866276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.866300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.882197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.882238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 11639.50 IOPS, 90.93 MiB/s [2024-11-15T10:47:45.015Z] [2024-11-15 11:47:44.897702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.897742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.907708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.907734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.919474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.919501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.930238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.930276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.946264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.946310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.963745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.963770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.973656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.973680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.988885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.988910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.588 [2024-11-15 11:47:44.998425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.588 [2024-11-15 11:47:44.998452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.014191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.014219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.024148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.024174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.035771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
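The throughput markers interleaved with the errors (11639.50 IOPS, 90.93 MiB/s just above, and similar lines earlier and later) are periodic progress output from the I/O job that is summarized in the Latency table further down. With the 8192-byte I/O size reported for that job, the MiB/s figure is simply the IOPS figure scaled by the I/O size, which is easy to confirm:

    # 11639.50 I/Os per second * 8192 bytes per I/O, expressed in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 11639.50 * 8192 / (1024 * 1024) }'   # prints 90.93 MiB/s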
00:29:04.846 [2024-11-15 11:47:45.035798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.046227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.046253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.061883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.061909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.071641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.071681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.083733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.083758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.094553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.094580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.108560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.108587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.118186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.118210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.130187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.846 [2024-11-15 11:47:45.130227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.846 [2024-11-15 11:47:45.144902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.144929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.154492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.154517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.170259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.170298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.179918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.179942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.191694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.191720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.202195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.202221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.215679] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.215705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.225346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.225371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.237352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.237377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.252922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.252947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:04.847 [2024-11-15 11:47:45.262042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:04.847 [2024-11-15 11:47:45.262066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.274464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.105 [2024-11-15 11:47:45.274492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.290875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.105 [2024-11-15 11:47:45.290901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.306261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.105 [2024-11-15 11:47:45.306301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.323554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.105 [2024-11-15 11:47:45.323579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.334565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.105 [2024-11-15 11:47:45.334609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.350739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.105 [2024-11-15 11:47:45.350766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.105 [2024-11-15 11:47:45.366288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.366323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.383814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.383838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.393197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.393221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.404785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.404810] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.415101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.415125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.430195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.430221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.439646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.439673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.451272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.451298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.462335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.462361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.478094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.478119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.487045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.487073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.498967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.499007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.511984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.512010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.106 [2024-11-15 11:47:45.521162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.106 [2024-11-15 11:47:45.521187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.533277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.533313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.548860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.548894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.557932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.557972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.569650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.569675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.584179] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.584207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.593443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.593470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.605112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.605138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.615957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.615996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.627237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.627264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.641778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.641804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.651441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.651469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.663231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.663255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.673879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.673904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.688153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.688181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.697682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.697706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.709473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.709501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.723544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.723571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.732898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.732944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.744635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.744675] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.755040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.755064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.769500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.769539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.365 [2024-11-15 11:47:45.779030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.365 [2024-11-15 11:47:45.779069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.790809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.790835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.801950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.801975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.816841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.816866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.826418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.826445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.840572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.840599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.850086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.850111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.861739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.861763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.876863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.876889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.885873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.885911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 11656.60 IOPS, 91.07 MiB/s [2024-11-15T10:47:46.051Z] [2024-11-15 11:47:45.895850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.895876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:45.936536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:45.936561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 00:29:05.624 
Latency(us)
00:29:05.624 [2024-11-15T10:47:46.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.624 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:05.624 Nvme1n1 : 5.05 11570.29 90.39 0.00 0.00 10962.88 2864.17 51263.72
00:29:05.624 [2024-11-15T10:47:46.051Z] ===================================================================================================================
00:29:05.624 [2024-11-15T10:47:46.051Z] Total : 11570.29 90.39 0.00 0.00 10962.88 2864.17 51263.72
00:29:05.624 [2024-11-15 11:47:45.943825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.943862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.951805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.951827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.959812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.959835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.967866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.967908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.975869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.975911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.983864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.983904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.991862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.991899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:45.999855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:45.999895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:46.007869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:46.007913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:46.015863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:46.015900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:46.023863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:46.023904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:46.031869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:05.624 [2024-11-15 11:47:46.031911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:05.624 [2024-11-15 11:47:46.039866]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:46.039902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.624 [2024-11-15 11:47:46.047880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.624 [2024-11-15 11:47:46.047928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.055878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.055925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.063865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.063908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.071872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.071911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.079865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.079906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.087828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.087859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.095800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.095820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.103800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.103819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.111800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.111820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.127901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.127955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.135860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.135899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.143850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.143879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.151800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.151819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.159800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.159819] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 [2024-11-15 11:47:46.167797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:05.883 [2024-11-15 11:47:46.167815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:05.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3077920) - No such process 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3077920 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:05.883 delay0 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.883 11:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:29:05.883 [2024-11-15 11:47:46.251773] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:13.992 Initializing NVMe Controllers 00:29:13.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:13.992 Initialization complete. Launching workers. 
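With the background namespace churn reaped (the kill/wait on PID 3077920 at zcopy.sh lines 42 and 49 above), the script rebuilds namespace 1 behind a delay bdev and points the abort example at it; the large -r/-t/-w/-n delay values keep I/O outstanding long enough for abort requests to find something to cancel. Condensed from the trace above (rpc_cmd is the autotest helper that forwards to scripts/rpc.py; paths and the NQN are the ones used in this run), the sequence is roughly:

    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The per-namespace and per-controller abort results follow below.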
00:29:13.992 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3249 00:29:13.992 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3536, failed to submit 33 00:29:13.992 success 3465, unsuccessful 71, failed 0 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.992 rmmod nvme_tcp 00:29:13.992 rmmod nvme_fabrics 00:29:13.992 rmmod nvme_keyring 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3076600 ']' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3076600 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3076600 ']' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3076600 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3076600 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3076600' 00:29:13.992 killing process with pid 3076600 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3076600 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3076600 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.992 11:47:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.992 11:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.373 00:29:15.373 real 0m28.539s 00:29:15.373 user 0m41.109s 00:29:15.373 sys 0m9.480s 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:15.373 ************************************ 00:29:15.373 END TEST nvmf_zcopy 00:29:15.373 ************************************ 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:15.373 ************************************ 00:29:15.373 START TEST nvmf_nmic 00:29:15.373 ************************************ 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:15.373 * Looking for test storage... 
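The nvmftestfini/nvmfcleanup trace above tears the target back down: the kernel NVMe-oF initiator modules are unloaded, the target process (PID 3076600) is killed, iptables rules tagged SPDK_NVMF are dropped, and the test interface address is flushed, after which autotest moves straight on to the next per-target test. A condensed restatement of that teardown, taken from the trace above (the iptr step shows three separate commands in the trace, presumably joined as one pipeline; cvl_0_1 is this run's test interface):

    modprobe -v -r nvme-tcp        # the rmmod lines above show nvme_fabrics and nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    kill 3076600                   # the nvmf target started for this test (killprocess above)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep every rule except the SPDK_NVMF ones
    ip -4 addr flush cvl_0_1
    run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode

The nmic.sh run launched above continues below with test-storage discovery and the lcov version check.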
00:29:15.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:15.373 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.374 --rc genhtml_branch_coverage=1 00:29:15.374 --rc genhtml_function_coverage=1 00:29:15.374 --rc genhtml_legend=1 00:29:15.374 --rc geninfo_all_blocks=1 00:29:15.374 --rc geninfo_unexecuted_blocks=1 00:29:15.374 00:29:15.374 ' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.374 --rc genhtml_branch_coverage=1 00:29:15.374 --rc genhtml_function_coverage=1 00:29:15.374 --rc genhtml_legend=1 00:29:15.374 --rc geninfo_all_blocks=1 00:29:15.374 --rc geninfo_unexecuted_blocks=1 00:29:15.374 00:29:15.374 ' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.374 --rc genhtml_branch_coverage=1 00:29:15.374 --rc genhtml_function_coverage=1 00:29:15.374 --rc genhtml_legend=1 00:29:15.374 --rc geninfo_all_blocks=1 00:29:15.374 --rc geninfo_unexecuted_blocks=1 00:29:15.374 00:29:15.374 ' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.374 --rc genhtml_branch_coverage=1 00:29:15.374 --rc genhtml_function_coverage=1 00:29:15.374 --rc genhtml_legend=1 00:29:15.374 --rc geninfo_all_blocks=1 00:29:15.374 --rc geninfo_unexecuted_blocks=1 00:29:15.374 00:29:15.374 ' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.374 11:47:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.374 11:47:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.908 11:47:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:17.908 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.908 11:47:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:17.908 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:17.908 Found net devices under 0000:09:00.0: cvl_0_0 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.908 
11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:17.908 Found net devices under 0000:09:00.1: cvl_0_1 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.908 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:29:17.909 00:29:17.909 --- 10.0.0.2 ping statistics --- 00:29:17.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.909 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:29:17.909 00:29:17.909 --- 10.0.0.1 ping statistics --- 00:29:17.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.909 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3081311 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3081311 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3081311 ']' 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.909 11:47:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 [2024-11-15 11:47:57.953785] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:17.909 [2024-11-15 11:47:57.954814] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:29:17.909 [2024-11-15 11:47:57.954869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.909 [2024-11-15 11:47:58.027596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.909 [2024-11-15 11:47:58.086899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.909 [2024-11-15 11:47:58.086950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.909 [2024-11-15 11:47:58.086978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.909 [2024-11-15 11:47:58.086989] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.909 [2024-11-15 11:47:58.086998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.909 [2024-11-15 11:47:58.088617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.909 [2024-11-15 11:47:58.088740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.909 [2024-11-15 11:47:58.088808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.909 [2024-11-15 11:47:58.088811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.909 [2024-11-15 11:47:58.175501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:17.909 [2024-11-15 11:47:58.175700] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:17.909 [2024-11-15 11:47:58.176043] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:17.909 [2024-11-15 11:47:58.176677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:17.909 [2024-11-15 11:47:58.176907] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 [2024-11-15 11:47:58.221483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 Malloc0 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.909 
11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 [2024-11-15 11:47:58.289652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:17.909 test case1: single bdev can't be used in multiple subsystems 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.909 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.909 [2024-11-15 11:47:58.313405] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:29:17.909 [2024-11-15 11:47:58.313435] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:17.909 [2024-11-15 11:47:58.313466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:17.909 request: 00:29:17.909 { 00:29:17.909 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:17.909 "namespace": { 00:29:17.909 "bdev_name": "Malloc0", 00:29:17.909 "no_auto_visible": false 00:29:17.909 }, 00:29:17.909 "method": "nvmf_subsystem_add_ns", 00:29:17.909 "req_id": 1 00:29:17.909 } 00:29:17.909 Got JSON-RPC error response 00:29:17.910 response: 00:29:17.910 { 00:29:17.910 "code": -32602, 00:29:17.910 "message": "Invalid parameters" 00:29:17.910 } 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:17.910 11:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:17.910 Adding namespace failed - expected result. 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:17.910 test case2: host connect to nvmf target in multiple paths 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:17.910 [2024-11-15 11:47:58.321493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.910 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:18.168 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:18.427 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:18.427 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:18.427 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:18.427 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:18.427 11:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:20.325 11:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:20.325 [global] 00:29:20.325 thread=1 00:29:20.325 invalidate=1 
00:29:20.325 rw=write 00:29:20.325 time_based=1 00:29:20.325 runtime=1 00:29:20.325 ioengine=libaio 00:29:20.325 direct=1 00:29:20.325 bs=4096 00:29:20.325 iodepth=1 00:29:20.325 norandommap=0 00:29:20.325 numjobs=1 00:29:20.325 00:29:20.325 verify_dump=1 00:29:20.325 verify_backlog=512 00:29:20.325 verify_state_save=0 00:29:20.325 do_verify=1 00:29:20.325 verify=crc32c-intel 00:29:20.325 [job0] 00:29:20.325 filename=/dev/nvme0n1 00:29:20.325 Could not set queue depth (nvme0n1) 00:29:20.583 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:20.583 fio-3.35 00:29:20.583 Starting 1 thread 00:29:21.956 00:29:21.956 job0: (groupid=0, jobs=1): err= 0: pid=3081859: Fri Nov 15 11:48:02 2024 00:29:21.956 read: IOPS=262, BW=1049KiB/s (1074kB/s)(1076KiB/1026msec) 00:29:21.956 slat (nsec): min=5625, max=29031, avg=10608.12, stdev=6684.80 00:29:21.956 clat (usec): min=211, max=42013, avg=3322.88, stdev=10774.04 00:29:21.956 lat (usec): min=217, max=42028, avg=3333.49, stdev=10775.35 00:29:21.956 clat percentiles (usec): 00:29:21.956 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 239], 00:29:21.956 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 277], 00:29:21.956 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 408], 95.00th=[41157], 00:29:21.956 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:21.956 | 99.99th=[42206] 00:29:21.956 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:29:21.956 slat (usec): min=7, max=28035, avg=68.67, stdev=1238.40 00:29:21.956 clat (usec): min=146, max=345, avg=177.50, stdev=32.72 00:29:21.956 lat (usec): min=154, max=28249, avg=246.17, stdev=1240.53 00:29:21.956 clat percentiles (usec): 00:29:21.956 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 151], 20.00th=[ 155], 00:29:21.956 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 176], 00:29:21.956 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 221], 95.00th=[ 251], 00:29:21.956 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 347], 99.95th=[ 347], 00:29:21.956 | 99.99th=[ 347] 00:29:21.956 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:29:21.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:21.956 lat (usec) : 250=72.98%, 500=24.46% 00:29:21.956 lat (msec) : 50=2.56% 00:29:21.957 cpu : usr=0.88%, sys=0.98%, ctx=783, majf=0, minf=1 00:29:21.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:21.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.957 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:21.957 00:29:21.957 Run status group 0 (all jobs): 00:29:21.957 READ: bw=1049KiB/s (1074kB/s), 1049KiB/s-1049KiB/s (1074kB/s-1074kB/s), io=1076KiB (1102kB), run=1026-1026msec 00:29:21.957 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:29:21.957 00:29:21.957 Disk stats (read/write): 00:29:21.957 nvme0n1: ios=291/512, merge=0/0, ticks=1717/81, in_queue=1798, util=98.60% 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:21.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:21.957 11:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.957 rmmod nvme_tcp 00:29:21.957 rmmod nvme_fabrics 00:29:21.957 rmmod nvme_keyring 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3081311 ']' 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3081311 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3081311 ']' 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3081311 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3081311 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3081311' 00:29:21.957 killing process with pid 3081311 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3081311 00:29:21.957 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3081311 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.215 11:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.749 00:29:24.749 real 0m9.126s 00:29:24.749 user 0m16.891s 00:29:24.749 sys 0m3.289s 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:24.749 ************************************ 00:29:24.749 END TEST nvmf_nmic 00:29:24.749 ************************************ 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:24.749 ************************************ 00:29:24.749 START TEST nvmf_fio_target 00:29:24.749 ************************************ 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:24.749 * Looking for test storage... 
00:29:24.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.749 --rc genhtml_branch_coverage=1 00:29:24.749 --rc genhtml_function_coverage=1 00:29:24.749 --rc genhtml_legend=1 00:29:24.749 --rc geninfo_all_blocks=1 00:29:24.749 --rc geninfo_unexecuted_blocks=1 00:29:24.749 00:29:24.749 ' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.749 --rc genhtml_branch_coverage=1 00:29:24.749 --rc genhtml_function_coverage=1 00:29:24.749 --rc genhtml_legend=1 00:29:24.749 --rc geninfo_all_blocks=1 00:29:24.749 --rc geninfo_unexecuted_blocks=1 00:29:24.749 00:29:24.749 ' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.749 --rc genhtml_branch_coverage=1 00:29:24.749 --rc genhtml_function_coverage=1 00:29:24.749 --rc genhtml_legend=1 00:29:24.749 --rc geninfo_all_blocks=1 00:29:24.749 --rc geninfo_unexecuted_blocks=1 00:29:24.749 00:29:24.749 ' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.749 --rc genhtml_branch_coverage=1 00:29:24.749 --rc genhtml_function_coverage=1 00:29:24.749 --rc genhtml_legend=1 00:29:24.749 --rc geninfo_all_blocks=1 00:29:24.749 --rc geninfo_unexecuted_blocks=1 00:29:24.749 
00:29:24.749 ' 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.749 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.750 11:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.675 11:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.675 11:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:26.675 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:26.675 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:26.675 Found net 
devices under 0000:09:00.0: cvl_0_0 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.675 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:26.676 Found net devices under 0000:09:00.1: cvl_0_1 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:29:26.676 00:29:26.676 --- 10.0.0.2 ping statistics --- 00:29:26.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.676 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:26.676 00:29:26.676 --- 10.0.0.1 ping statistics --- 00:29:26.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.676 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.676 11:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3083997 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3083997 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3083997 ']' 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
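The nvmftestinit trace above boils down to a small, repeatable network setup: the first e810 port (cvl_0_0) is moved into a private namespace and becomes the target interface at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened with an iptables rule, reachability is ping-checked in both directions, and nvmf_tgt is then started inside that namespace in interrupt mode. A condensed sketch of the same steps, using the interface names and addresses from this run (full workspace paths shortened, target shown backgrounded):

    # move the target-side port into its own namespace so traffic really crosses the NIC
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # nvmfappstart launches the target inside the namespace with all reactors in interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &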
00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.676 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.676 [2024-11-15 11:48:07.056906] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:26.676 [2024-11-15 11:48:07.058013] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:29:26.676 [2024-11-15 11:48:07.058093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.969 [2024-11-15 11:48:07.135804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.969 [2024-11-15 11:48:07.199919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.969 [2024-11-15 11:48:07.199967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.969 [2024-11-15 11:48:07.199996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.969 [2024-11-15 11:48:07.200007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.969 [2024-11-15 11:48:07.200017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.969 [2024-11-15 11:48:07.201758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.969 [2024-11-15 11:48:07.201821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.969 [2024-11-15 11:48:07.201887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.969 [2024-11-15 11:48:07.201890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.969 [2024-11-15 11:48:07.294468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:26.969 [2024-11-15 11:48:07.294691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:26.969 [2024-11-15 11:48:07.295010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:26.969 [2024-11-15 11:48:07.295703] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:26.969 [2024-11-15 11:48:07.295927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
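With all four reactors now running in interrupt mode, target/fio.sh configures the target entirely over the JSON-RPC socket before any I/O is issued: a TCP transport, seven malloc bdevs (bdev_malloc_create 64 512: 64 MiB each, 512-byte blocks), a RAID-0 over two of them and a concat bdev over three more, and a single subsystem exposing Malloc0, Malloc1, raid0 and concat0 as four namespaces that the initiator attaches with nvme-cli. Condensed from the rpc.py calls traced below (full workspace paths shortened to rpc.py, the seven identical malloc creates collapsed into one commented line):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512        # run seven times -> Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    # the four namespaces appear as /dev/nvme0n1 .. /dev/nvme0n4, which the fio jobs below target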
00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.969 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:27.227 [2024-11-15 11:48:07.614554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.485 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:27.744 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:27.744 11:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.004 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:28.004 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.263 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:28.263 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:28.521 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:28.521 11:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:28.779 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:29.038 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:29.038 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:29.297 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:29.297 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:29.863 11:48:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:29.863 11:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:29.863 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:30.120 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:30.120 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.378 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:30.378 11:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:30.943 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.943 [2024-11-15 11:48:11.322769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.944 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:31.201 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:31.774 11:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:31.774 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:31.774 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:29:31.774 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:31.774 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:29:31.774 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:29:31.774 11:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:29:33.679 11:48:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:33.679 [global] 00:29:33.679 thread=1 00:29:33.679 invalidate=1 00:29:33.679 rw=write 00:29:33.679 time_based=1 00:29:33.679 runtime=1 00:29:33.679 ioengine=libaio 00:29:33.679 direct=1 00:29:33.679 bs=4096 00:29:33.679 iodepth=1 00:29:33.679 norandommap=0 00:29:33.679 numjobs=1 00:29:33.679 00:29:33.679 verify_dump=1 00:29:33.679 verify_backlog=512 00:29:33.679 verify_state_save=0 00:29:33.679 do_verify=1 00:29:33.679 verify=crc32c-intel 00:29:33.679 [job0] 00:29:33.679 filename=/dev/nvme0n1 00:29:33.679 [job1] 00:29:33.679 filename=/dev/nvme0n2 00:29:33.679 [job2] 00:29:33.679 filename=/dev/nvme0n3 00:29:33.679 [job3] 00:29:33.679 filename=/dev/nvme0n4 00:29:33.937 Could not set queue depth (nvme0n1) 00:29:33.937 Could not set queue depth (nvme0n2) 00:29:33.937 Could not set queue depth (nvme0n3) 00:29:33.937 Could not set queue depth (nvme0n4) 00:29:33.937 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.937 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.937 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.937 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:33.937 fio-3.35 00:29:33.937 Starting 4 threads 00:29:35.309 00:29:35.309 job0: (groupid=0, jobs=1): err= 0: pid=3085569: Fri Nov 15 11:48:15 2024 00:29:35.309 read: IOPS=1918, BW=7672KiB/s (7856kB/s)(7680KiB/1001msec) 00:29:35.309 slat (nsec): min=5381, max=32136, avg=7539.47, stdev=3522.53 00:29:35.309 clat (usec): min=186, max=624, avg=289.57, stdev=55.25 00:29:35.309 lat (usec): min=192, max=635, avg=297.11, stdev=56.50 00:29:35.309 clat percentiles (usec): 00:29:35.309 | 1.00th=[ 221], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 255], 00:29:35.309 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:29:35.309 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 392], 00:29:35.309 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 594], 99.95th=[ 627], 00:29:35.309 | 99.99th=[ 627] 00:29:35.309 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:35.309 slat (nsec): min=6927, max=38077, avg=9317.82, stdev=3845.05 00:29:35.309 clat (usec): min=132, max=458, avg=195.36, stdev=27.42 00:29:35.309 lat (usec): min=139, max=467, avg=204.68, stdev=27.53 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 161], 20.00th=[ 182], 00:29:35.310 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:29:35.310 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 241], 00:29:35.310 | 99.00th=[ 265], 99.50th=[ 
281], 99.90th=[ 297], 99.95th=[ 334], 00:29:35.310 | 99.99th=[ 457] 00:29:35.310 bw ( KiB/s): min= 8192, max= 8192, per=40.92%, avg=8192.00, stdev= 0.00, samples=1 00:29:35.310 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:35.310 lat (usec) : 250=53.91%, 500=45.34%, 750=0.76% 00:29:35.310 cpu : usr=2.60%, sys=4.80%, ctx=3968, majf=0, minf=2 00:29:35.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 issued rwts: total=1920,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.310 job1: (groupid=0, jobs=1): err= 0: pid=3085570: Fri Nov 15 11:48:15 2024 00:29:35.310 read: IOPS=22, BW=90.6KiB/s (92.7kB/s)(92.0KiB/1016msec) 00:29:35.310 slat (nsec): min=7528, max=18844, avg=14167.78, stdev=2346.06 00:29:35.310 clat (usec): min=245, max=41134, avg=39203.30, stdev=8492.73 00:29:35.310 lat (usec): min=261, max=41142, avg=39217.46, stdev=8492.34 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 245], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:35.310 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:35.310 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:35.310 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:35.310 | 99.99th=[41157] 00:29:35.310 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:29:35.310 slat (nsec): min=7904, max=25255, avg=9256.34, stdev=2298.07 00:29:35.310 clat (usec): min=153, max=284, avg=201.39, stdev=21.32 00:29:35.310 lat (usec): min=161, max=293, avg=210.64, stdev=21.26 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 182], 00:29:35.310 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:29:35.310 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 229], 00:29:35.310 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 285], 99.95th=[ 285], 00:29:35.310 | 99.99th=[ 285] 00:29:35.310 bw ( KiB/s): min= 4096, max= 4096, per=20.46%, avg=4096.00, stdev= 0.00, samples=1 00:29:35.310 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:35.310 lat (usec) : 250=92.90%, 500=2.99% 00:29:35.310 lat (msec) : 50=4.11% 00:29:35.310 cpu : usr=0.30%, sys=0.59%, ctx=536, majf=0, minf=1 00:29:35.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.310 job2: (groupid=0, jobs=1): err= 0: pid=3085571: Fri Nov 15 11:48:15 2024 00:29:35.310 read: IOPS=330, BW=1322KiB/s (1353kB/s)(1352KiB/1023msec) 00:29:35.310 slat (nsec): min=4270, max=23481, avg=5959.33, stdev=3037.90 00:29:35.310 clat (usec): min=203, max=42018, avg=2709.38, stdev=9817.54 00:29:35.310 lat (usec): min=211, max=42031, avg=2715.34, stdev=9819.32 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:29:35.310 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:29:35.310 | 70.00th=[ 255], 
80.00th=[ 260], 90.00th=[ 269], 95.00th=[41681], 00:29:35.310 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:35.310 | 99.99th=[42206] 00:29:35.310 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:29:35.310 slat (nsec): min=5673, max=24976, avg=6885.50, stdev=2640.71 00:29:35.310 clat (usec): min=167, max=246, avg=186.11, stdev= 9.27 00:29:35.310 lat (usec): min=173, max=271, avg=193.00, stdev= 9.88 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 178], 00:29:35.310 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:29:35.310 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 202], 00:29:35.310 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 247], 99.95th=[ 247], 00:29:35.310 | 99.99th=[ 247] 00:29:35.310 bw ( KiB/s): min= 4096, max= 4096, per=20.46%, avg=4096.00, stdev= 0.00, samples=1 00:29:35.310 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:35.310 lat (usec) : 250=77.41%, 500=20.24% 00:29:35.310 lat (msec) : 50=2.35% 00:29:35.310 cpu : usr=0.39%, sys=0.39%, ctx=852, majf=0, minf=1 00:29:35.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 issued rwts: total=338,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.310 job3: (groupid=0, jobs=1): err= 0: pid=3085572: Fri Nov 15 11:48:15 2024 00:29:35.310 read: IOPS=1820, BW=7281KiB/s (7455kB/s)(7288KiB/1001msec) 00:29:35.310 slat (nsec): min=5647, max=44550, avg=7920.71, stdev=3854.92 00:29:35.310 clat (usec): min=206, max=570, avg=288.34, stdev=47.84 00:29:35.310 lat (usec): min=212, max=579, avg=296.27, stdev=49.73 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 229], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:29:35.310 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:29:35.310 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 383], 00:29:35.310 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 570], 00:29:35.310 | 99.99th=[ 570] 00:29:35.310 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:35.310 slat (nsec): min=7195, max=38114, avg=9499.52, stdev=3067.43 00:29:35.310 clat (usec): min=141, max=497, avg=210.48, stdev=60.07 00:29:35.310 lat (usec): min=150, max=505, avg=219.98, stdev=60.77 00:29:35.310 clat percentiles (usec): 00:29:35.310 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 184], 00:29:35.310 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 200], 00:29:35.310 | 70.00th=[ 208], 80.00th=[ 223], 90.00th=[ 249], 95.00th=[ 396], 00:29:35.310 | 99.00th=[ 429], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 474], 00:29:35.310 | 99.99th=[ 498] 00:29:35.310 bw ( KiB/s): min= 8192, max= 8192, per=40.92%, avg=8192.00, stdev= 0.00, samples=1 00:29:35.310 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:35.310 lat (usec) : 250=51.65%, 500=48.04%, 750=0.31% 00:29:35.310 cpu : usr=2.50%, sys=4.80%, ctx=3870, majf=0, minf=1 00:29:35.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:35.310 issued rwts: total=1822,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.310 00:29:35.310 Run status group 0 (all jobs): 00:29:35.310 READ: bw=15.7MiB/s (16.4MB/s), 90.6KiB/s-7672KiB/s (92.7kB/s-7856kB/s), io=16.0MiB (16.8MB), run=1001-1023msec 00:29:35.310 WRITE: bw=19.5MiB/s (20.5MB/s), 2002KiB/s-8184KiB/s (2050kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1023msec 00:29:35.310 00:29:35.310 Disk stats (read/write): 00:29:35.310 nvme0n1: ios=1586/2009, merge=0/0, ticks=429/378, in_queue=807, util=87.27% 00:29:35.310 nvme0n2: ios=71/512, merge=0/0, ticks=1486/97, in_queue=1583, util=94.82% 00:29:35.310 nvme0n3: ios=388/512, merge=0/0, ticks=904/91, in_queue=995, util=98.75% 00:29:35.310 nvme0n4: ios=1593/1808, merge=0/0, ticks=657/352, in_queue=1009, util=95.90% 00:29:35.310 11:48:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:35.310 [global] 00:29:35.310 thread=1 00:29:35.310 invalidate=1 00:29:35.310 rw=randwrite 00:29:35.310 time_based=1 00:29:35.310 runtime=1 00:29:35.310 ioengine=libaio 00:29:35.310 direct=1 00:29:35.310 bs=4096 00:29:35.310 iodepth=1 00:29:35.310 norandommap=0 00:29:35.310 numjobs=1 00:29:35.310 00:29:35.310 verify_dump=1 00:29:35.310 verify_backlog=512 00:29:35.310 verify_state_save=0 00:29:35.310 do_verify=1 00:29:35.310 verify=crc32c-intel 00:29:35.310 [job0] 00:29:35.310 filename=/dev/nvme0n1 00:29:35.310 [job1] 00:29:35.310 filename=/dev/nvme0n2 00:29:35.310 [job2] 00:29:35.310 filename=/dev/nvme0n3 00:29:35.310 [job3] 00:29:35.310 filename=/dev/nvme0n4 00:29:35.310 Could not set queue depth (nvme0n1) 00:29:35.310 Could not set queue depth (nvme0n2) 00:29:35.310 Could not set queue depth (nvme0n3) 00:29:35.310 Could not set queue depth (nvme0n4) 00:29:35.568 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.568 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.568 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.568 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:35.568 fio-3.35 00:29:35.568 Starting 4 threads 00:29:36.941 00:29:36.941 job0: (groupid=0, jobs=1): err= 0: pid=3085792: Fri Nov 15 11:48:17 2024 00:29:36.941 read: IOPS=20, BW=80.7KiB/s (82.6kB/s)(84.0KiB/1041msec) 00:29:36.941 slat (nsec): min=13152, max=16242, avg=14349.71, stdev=978.31 00:29:36.941 clat (usec): min=40860, max=42028, avg=41041.26, stdev=242.33 00:29:36.941 lat (usec): min=40875, max=42041, avg=41055.61, stdev=242.18 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:36.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:36.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:36.941 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:36.941 | 99.99th=[42206] 00:29:36.941 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:29:36.941 slat (nsec): min=6462, max=63283, avg=19908.06, stdev=11064.03 00:29:36.941 clat (usec): min=171, max=559, avg=323.58, stdev=76.17 00:29:36.941 lat (usec): min=187, max=600, 
avg=343.48, stdev=77.61 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 198], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 243], 00:29:36.941 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 322], 60.00th=[ 355], 00:29:36.941 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 449], 00:29:36.941 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 562], 99.95th=[ 562], 00:29:36.941 | 99.99th=[ 562] 00:29:36.941 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:29:36.941 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:36.941 lat (usec) : 250=21.76%, 500=73.73%, 750=0.56% 00:29:36.941 lat (msec) : 50=3.94% 00:29:36.941 cpu : usr=0.38%, sys=1.06%, ctx=534, majf=0, minf=1 00:29:36.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.941 job1: (groupid=0, jobs=1): err= 0: pid=3085793: Fri Nov 15 11:48:17 2024 00:29:36.941 read: IOPS=562, BW=2248KiB/s (2302kB/s)(2284KiB/1016msec) 00:29:36.941 slat (nsec): min=6090, max=67150, avg=13488.67, stdev=7313.75 00:29:36.941 clat (usec): min=228, max=41039, avg=1384.49, stdev=6500.23 00:29:36.941 lat (usec): min=235, max=41053, avg=1397.97, stdev=6500.33 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 251], 00:29:36.941 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:29:36.941 | 70.00th=[ 318], 80.00th=[ 437], 90.00th=[ 482], 95.00th=[ 537], 00:29:36.941 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:36.941 | 99.99th=[41157] 00:29:36.941 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:29:36.941 slat (nsec): min=7789, max=44593, avg=12629.03, stdev=6934.92 00:29:36.941 clat (usec): min=163, max=556, avg=193.91, stdev=24.63 00:29:36.941 lat (usec): min=171, max=568, avg=206.54, stdev=28.41 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:29:36.941 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:29:36.941 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 235], 00:29:36.941 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 553], 00:29:36.941 | 99.99th=[ 553] 00:29:36.941 bw ( KiB/s): min= 8192, max= 8192, per=52.05%, avg=8192.00, stdev= 0.00, samples=1 00:29:36.941 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:36.941 lat (usec) : 250=69.34%, 500=28.34%, 750=1.25%, 1000=0.06% 00:29:36.941 lat (msec) : 2=0.06%, 50=0.94% 00:29:36.941 cpu : usr=1.48%, sys=2.66%, ctx=1596, majf=0, minf=1 00:29:36.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 issued rwts: total=571,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.941 job2: (groupid=0, jobs=1): err= 0: pid=3085794: Fri Nov 15 11:48:17 2024 00:29:36.941 read: IOPS=1931, BW=7724KiB/s (7910kB/s)(7732KiB/1001msec) 00:29:36.941 slat (nsec): min=4665, max=49214, 
avg=7155.04, stdev=3957.20 00:29:36.941 clat (usec): min=205, max=2984, avg=260.63, stdev=91.60 00:29:36.941 lat (usec): min=210, max=2989, avg=267.79, stdev=92.98 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:29:36.941 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:29:36.941 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 424], 00:29:36.941 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 963], 99.95th=[ 2999], 00:29:36.941 | 99.99th=[ 2999] 00:29:36.941 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:36.941 slat (nsec): min=6000, max=69202, avg=10486.55, stdev=7951.67 00:29:36.941 clat (usec): min=137, max=564, avg=220.27, stdev=75.48 00:29:36.941 lat (usec): min=144, max=580, avg=230.76, stdev=79.86 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 169], 00:29:36.941 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:29:36.941 | 70.00th=[ 210], 80.00th=[ 249], 90.00th=[ 347], 95.00th=[ 408], 00:29:36.941 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 523], 99.95th=[ 545], 00:29:36.941 | 99.99th=[ 562] 00:29:36.941 bw ( KiB/s): min= 8192, max= 8192, per=52.05%, avg=8192.00, stdev= 0.00, samples=1 00:29:36.941 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:36.941 lat (usec) : 250=74.20%, 500=24.54%, 750=1.18%, 1000=0.05% 00:29:36.941 lat (msec) : 4=0.03% 00:29:36.941 cpu : usr=2.30%, sys=3.10%, ctx=3982, majf=0, minf=2 00:29:36.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 issued rwts: total=1933,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.941 job3: (groupid=0, jobs=1): err= 0: pid=3085795: Fri Nov 15 11:48:17 2024 00:29:36.941 read: IOPS=25, BW=100KiB/s (103kB/s)(104KiB/1037msec) 00:29:36.941 slat (nsec): min=8236, max=23050, avg=14883.58, stdev=3128.24 00:29:36.941 clat (usec): min=239, max=41224, avg=33158.96, stdev=16334.92 00:29:36.941 lat (usec): min=253, max=41243, avg=33173.84, stdev=16333.82 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 239], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[40633], 00:29:36.941 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:36.941 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:36.941 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:36.941 | 99.99th=[41157] 00:29:36.941 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:29:36.941 slat (nsec): min=7146, max=56019, avg=19400.91, stdev=9333.52 00:29:36.941 clat (usec): min=163, max=560, avg=316.07, stdev=81.43 00:29:36.941 lat (usec): min=173, max=568, avg=335.47, stdev=79.80 00:29:36.941 clat percentiles (usec): 00:29:36.941 | 1.00th=[ 174], 5.00th=[ 196], 10.00th=[ 210], 20.00th=[ 231], 00:29:36.941 | 30.00th=[ 262], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 334], 00:29:36.941 | 70.00th=[ 371], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 449], 00:29:36.941 | 99.00th=[ 469], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 562], 00:29:36.941 | 99.99th=[ 562] 00:29:36.941 bw ( KiB/s): min= 4096, max= 4096, per=26.03%, avg=4096.00, stdev= 0.00, samples=1 00:29:36.941 iops : min= 
1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:36.941 lat (usec) : 250=24.54%, 500=71.00%, 750=0.56% 00:29:36.941 lat (msec) : 50=3.90% 00:29:36.941 cpu : usr=0.68%, sys=0.97%, ctx=540, majf=0, minf=1 00:29:36.941 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.941 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.941 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:36.941 00:29:36.941 Run status group 0 (all jobs): 00:29:36.941 READ: bw=9802KiB/s (10.0MB/s), 80.7KiB/s-7724KiB/s (82.6kB/s-7910kB/s), io=9.96MiB (10.4MB), run=1001-1041msec 00:29:36.941 WRITE: bw=15.4MiB/s (16.1MB/s), 1967KiB/s-8184KiB/s (2015kB/s-8380kB/s), io=16.0MiB (16.8MB), run=1001-1041msec 00:29:36.941 00:29:36.941 Disk stats (read/write): 00:29:36.941 nvme0n1: ios=72/512, merge=0/0, ticks=1016/155, in_queue=1171, util=91.38% 00:29:36.941 nvme0n2: ios=616/1024, merge=0/0, ticks=1012/196, in_queue=1208, util=94.52% 00:29:36.941 nvme0n3: ios=1585/1816, merge=0/0, ticks=1027/403, in_queue=1430, util=96.88% 00:29:36.941 nvme0n4: ios=80/512, merge=0/0, ticks=1225/155, in_queue=1380, util=98.74% 00:29:36.941 11:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:36.941 [global] 00:29:36.941 thread=1 00:29:36.941 invalidate=1 00:29:36.941 rw=write 00:29:36.941 time_based=1 00:29:36.941 runtime=1 00:29:36.941 ioengine=libaio 00:29:36.941 direct=1 00:29:36.941 bs=4096 00:29:36.941 iodepth=128 00:29:36.941 norandommap=0 00:29:36.942 numjobs=1 00:29:36.942 00:29:36.942 verify_dump=1 00:29:36.942 verify_backlog=512 00:29:36.942 verify_state_save=0 00:29:36.942 do_verify=1 00:29:36.942 verify=crc32c-intel 00:29:36.942 [job0] 00:29:36.942 filename=/dev/nvme0n1 00:29:36.942 [job1] 00:29:36.942 filename=/dev/nvme0n2 00:29:36.942 [job2] 00:29:36.942 filename=/dev/nvme0n3 00:29:36.942 [job3] 00:29:36.942 filename=/dev/nvme0n4 00:29:36.942 Could not set queue depth (nvme0n1) 00:29:36.942 Could not set queue depth (nvme0n2) 00:29:36.942 Could not set queue depth (nvme0n3) 00:29:36.942 Could not set queue depth (nvme0n4) 00:29:36.942 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.942 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.942 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.942 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:36.942 fio-3.35 00:29:36.942 Starting 4 threads 00:29:38.322 00:29:38.322 job0: (groupid=0, jobs=1): err= 0: pid=3086027: Fri Nov 15 11:48:18 2024 00:29:38.322 read: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1003msec) 00:29:38.322 slat (nsec): min=1997, max=13967k, avg=183691.84, stdev=1036205.57 00:29:38.322 clat (usec): min=655, max=51683, avg=21742.36, stdev=8779.73 00:29:38.322 lat (usec): min=2528, max=51690, avg=21926.05, stdev=8863.75 00:29:38.322 clat percentiles (usec): 00:29:38.322 | 1.00th=[ 4817], 5.00th=[11994], 10.00th=[12649], 20.00th=[14877], 00:29:38.322 | 30.00th=[15664], 40.00th=[16909], 50.00th=[20841], 60.00th=[23725], 00:29:38.322 
| 70.00th=[25560], 80.00th=[27395], 90.00th=[32375], 95.00th=[38536], 00:29:38.322 | 99.00th=[49021], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:29:38.322 | 99.99th=[51643] 00:29:38.322 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:29:38.322 slat (usec): min=2, max=7579, avg=159.23, stdev=727.06 00:29:38.322 clat (usec): min=7110, max=47241, avg=22288.76, stdev=11435.35 00:29:38.322 lat (usec): min=7134, max=47246, avg=22447.99, stdev=11508.80 00:29:38.322 clat percentiles (usec): 00:29:38.322 | 1.00th=[ 7504], 5.00th=[10421], 10.00th=[11076], 20.00th=[12125], 00:29:38.322 | 30.00th=[12387], 40.00th=[14484], 50.00th=[19792], 60.00th=[22414], 00:29:38.322 | 70.00th=[24511], 80.00th=[35390], 90.00th=[42730], 95.00th=[44303], 00:29:38.322 | 99.00th=[44827], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:29:38.322 | 99.99th=[47449] 00:29:38.322 bw ( KiB/s): min=11560, max=13008, per=18.71%, avg=12284.00, stdev=1023.89, samples=2 00:29:38.322 iops : min= 2890, max= 3252, avg=3071.00, stdev=255.97, samples=2 00:29:38.322 lat (usec) : 750=0.02% 00:29:38.322 lat (msec) : 4=0.33%, 10=2.73%, 20=47.13%, 50=49.37%, 100=0.43% 00:29:38.322 cpu : usr=3.29%, sys=2.59%, ctx=298, majf=0, minf=1 00:29:38.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:29:38.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.322 issued rwts: total=2687,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.322 job1: (groupid=0, jobs=1): err= 0: pid=3086028: Fri Nov 15 11:48:18 2024 00:29:38.322 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:29:38.322 slat (usec): min=3, max=6153, avg=88.53, stdev=472.62 00:29:38.322 clat (usec): min=8033, max=27487, avg=11858.85, stdev=2596.00 00:29:38.322 lat (usec): min=8040, max=27503, avg=11947.39, stdev=2637.32 00:29:38.322 clat percentiles (usec): 00:29:38.322 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10552], 00:29:38.322 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:29:38.322 | 70.00th=[11731], 80.00th=[12256], 90.00th=[14222], 95.00th=[18744], 00:29:38.322 | 99.00th=[21890], 99.50th=[23987], 99.90th=[27395], 99.95th=[27395], 00:29:38.322 | 99.99th=[27395] 00:29:38.322 write: IOPS=5378, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1002msec); 0 zone resets 00:29:38.323 slat (usec): min=4, max=6298, avg=91.71, stdev=481.73 00:29:38.323 clat (usec): min=584, max=27534, avg=12164.10, stdev=2829.84 00:29:38.323 lat (usec): min=4075, max=27550, avg=12255.81, stdev=2860.33 00:29:38.323 clat percentiles (usec): 00:29:38.323 | 1.00th=[ 7701], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10683], 00:29:38.323 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:29:38.323 | 70.00th=[11731], 80.00th=[14091], 90.00th=[16450], 95.00th=[18482], 00:29:38.323 | 99.00th=[22152], 99.50th=[22676], 99.90th=[26608], 99.95th=[27395], 00:29:38.323 | 99.99th=[27657] 00:29:38.323 bw ( KiB/s): min=20480, max=20480, per=31.20%, avg=20480.00, stdev= 0.00, samples=1 00:29:38.323 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:29:38.323 lat (usec) : 750=0.01% 00:29:38.323 lat (msec) : 10=7.45%, 20=90.45%, 50=2.09% 00:29:38.323 cpu : usr=6.99%, sys=10.59%, ctx=490, majf=0, minf=1 00:29:38.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 
00:29:38.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.323 issued rwts: total=5120,5389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.323 job2: (groupid=0, jobs=1): err= 0: pid=3086029: Fri Nov 15 11:48:18 2024 00:29:38.323 read: IOPS=4706, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1006msec) 00:29:38.323 slat (usec): min=2, max=11136, avg=98.73, stdev=723.32 00:29:38.323 clat (usec): min=4925, max=28617, avg=13060.19, stdev=3220.31 00:29:38.323 lat (usec): min=5904, max=30003, avg=13158.92, stdev=3276.61 00:29:38.323 clat percentiles (usec): 00:29:38.323 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11207], 00:29:38.323 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:29:38.323 | 70.00th=[13173], 80.00th=[14484], 90.00th=[17957], 95.00th=[20055], 00:29:38.323 | 99.00th=[23462], 99.50th=[25035], 99.90th=[28705], 99.95th=[28705], 00:29:38.323 | 99.99th=[28705] 00:29:38.323 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:29:38.323 slat (usec): min=3, max=11411, avg=94.66, stdev=605.74 00:29:38.323 clat (usec): min=1636, max=35665, avg=12755.66, stdev=4000.86 00:29:38.323 lat (usec): min=3639, max=38835, avg=12850.32, stdev=4033.57 00:29:38.323 clat percentiles (usec): 00:29:38.323 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[ 9896], 00:29:38.323 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:29:38.323 | 70.00th=[13173], 80.00th=[13698], 90.00th=[16909], 95.00th=[19006], 00:29:38.323 | 99.00th=[31589], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:29:38.323 | 99.99th=[35914] 00:29:38.323 bw ( KiB/s): min=20480, max=20480, per=31.20%, avg=20480.00, stdev= 0.00, samples=2 00:29:38.323 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:29:38.323 lat (msec) : 2=0.01%, 4=0.06%, 10=14.88%, 20=81.20%, 50=3.86% 00:29:38.323 cpu : usr=6.37%, sys=7.86%, ctx=381, majf=0, minf=1 00:29:38.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:38.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.323 issued rwts: total=4735,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.323 job3: (groupid=0, jobs=1): err= 0: pid=3086030: Fri Nov 15 11:48:18 2024 00:29:38.323 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.3MiB/1046msec) 00:29:38.323 slat (usec): min=2, max=13045, avg=139.89, stdev=834.80 00:29:38.323 clat (usec): min=8154, max=54876, avg=18910.05, stdev=6859.35 00:29:38.323 lat (usec): min=8163, max=54881, avg=19049.94, stdev=6906.18 00:29:38.323 clat percentiles (usec): 00:29:38.323 | 1.00th=[10421], 5.00th=[11731], 10.00th=[13173], 20.00th=[13829], 00:29:38.323 | 30.00th=[15401], 40.00th=[16450], 50.00th=[16712], 60.00th=[18220], 00:29:38.323 | 70.00th=[20055], 80.00th=[23987], 90.00th=[25560], 95.00th=[28705], 00:29:38.323 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:29:38.323 | 99.99th=[54789] 00:29:38.323 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:29:38.323 slat (usec): min=3, max=9458, avg=146.23, stdev=743.18 00:29:38.323 clat (usec): min=6863, max=68353, avg=20014.16, stdev=9100.70 00:29:38.323 lat (usec): 
min=6869, max=68360, avg=20160.40, stdev=9129.00 00:29:38.323 clat percentiles (usec): 00:29:38.323 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[12125], 20.00th=[13829], 00:29:38.323 | 30.00th=[15270], 40.00th=[16581], 50.00th=[17171], 60.00th=[19006], 00:29:38.323 | 70.00th=[21365], 80.00th=[24511], 90.00th=[30540], 95.00th=[38011], 00:29:38.323 | 99.00th=[57934], 99.50th=[58983], 99.90th=[68682], 99.95th=[68682], 00:29:38.323 | 99.99th=[68682] 00:29:38.323 bw ( KiB/s): min=13840, max=14480, per=21.57%, avg=14160.00, stdev=452.55, samples=2 00:29:38.323 iops : min= 3460, max= 3620, avg=3540.00, stdev=113.14, samples=2 00:29:38.323 lat (msec) : 10=3.13%, 20=63.89%, 50=31.10%, 100=1.88% 00:29:38.323 cpu : usr=4.78%, sys=6.03%, ctx=309, majf=0, minf=1 00:29:38.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:29:38.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.323 issued rwts: total=3156,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.323 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.323 00:29:38.323 Run status group 0 (all jobs): 00:29:38.323 READ: bw=58.6MiB/s (61.5MB/s), 10.5MiB/s-20.0MiB/s (11.0MB/s-20.9MB/s), io=61.3MiB (64.3MB), run=1002-1046msec 00:29:38.323 WRITE: bw=64.1MiB/s (67.2MB/s), 12.0MiB/s-21.0MiB/s (12.5MB/s-22.0MB/s), io=67.1MiB (70.3MB), run=1002-1046msec 00:29:38.323 00:29:38.323 Disk stats (read/write): 00:29:38.323 nvme0n1: ios=2091/2407, merge=0/0, ticks=15820/16867, in_queue=32687, util=84.07% 00:29:38.323 nvme0n2: ios=4136/4295, merge=0/0, ticks=16240/16365, in_queue=32605, util=96.92% 00:29:38.323 nvme0n3: ios=3830/4096, merge=0/0, ticks=38741/37022, in_queue=75763, util=87.37% 00:29:38.323 nvme0n4: ios=2606/2962, merge=0/0, ticks=20482/22780, in_queue=43262, util=98.90% 00:29:38.323 11:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:38.323 [global] 00:29:38.323 thread=1 00:29:38.323 invalidate=1 00:29:38.323 rw=randwrite 00:29:38.323 time_based=1 00:29:38.323 runtime=1 00:29:38.323 ioengine=libaio 00:29:38.323 direct=1 00:29:38.323 bs=4096 00:29:38.323 iodepth=128 00:29:38.323 norandommap=0 00:29:38.323 numjobs=1 00:29:38.323 00:29:38.323 verify_dump=1 00:29:38.323 verify_backlog=512 00:29:38.323 verify_state_save=0 00:29:38.323 do_verify=1 00:29:38.323 verify=crc32c-intel 00:29:38.323 [job0] 00:29:38.323 filename=/dev/nvme0n1 00:29:38.323 [job1] 00:29:38.323 filename=/dev/nvme0n2 00:29:38.323 [job2] 00:29:38.323 filename=/dev/nvme0n3 00:29:38.323 [job3] 00:29:38.323 filename=/dev/nvme0n4 00:29:38.323 Could not set queue depth (nvme0n1) 00:29:38.323 Could not set queue depth (nvme0n2) 00:29:38.323 Could not set queue depth (nvme0n3) 00:29:38.323 Could not set queue depth (nvme0n4) 00:29:38.581 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:38.581 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:38.581 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:38.581 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:38.581 fio-3.35 00:29:38.581 Starting 4 threads 00:29:39.956 
00:29:39.956 job0: (groupid=0, jobs=1): err= 0: pid=3086329: Fri Nov 15 11:48:19 2024 00:29:39.956 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec) 00:29:39.956 slat (usec): min=3, max=18654, avg=203.47, stdev=1304.14 00:29:39.956 clat (usec): min=4970, max=90583, avg=19995.63, stdev=13946.57 00:29:39.956 lat (usec): min=4977, max=90590, avg=20199.10, stdev=14134.12 00:29:39.956 clat percentiles (usec): 00:29:39.956 | 1.00th=[ 5538], 5.00th=[12911], 10.00th=[13042], 20.00th=[13435], 00:29:39.956 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[15664], 00:29:39.956 | 70.00th=[19530], 80.00th=[22938], 90.00th=[34866], 95.00th=[54264], 00:29:39.956 | 99.00th=[84411], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:29:39.956 | 99.99th=[90702] 00:29:39.956 write: IOPS=2756, BW=10.8MiB/s (11.3MB/s)(11.0MiB/1018msec); 0 zone resets 00:29:39.956 slat (usec): min=5, max=32828, avg=164.12, stdev=1190.22 00:29:39.956 clat (usec): min=3517, max=90585, avg=27700.31, stdev=16891.18 00:29:39.956 lat (usec): min=3524, max=90594, avg=27864.43, stdev=16955.12 00:29:39.956 clat percentiles (usec): 00:29:39.956 | 1.00th=[ 5211], 5.00th=[ 8094], 10.00th=[11994], 20.00th=[13042], 00:29:39.956 | 30.00th=[19006], 40.00th=[23987], 50.00th=[26346], 60.00th=[27132], 00:29:39.956 | 70.00th=[29492], 80.00th=[33162], 90.00th=[49546], 95.00th=[73925], 00:29:39.956 | 99.00th=[79168], 99.50th=[80217], 99.90th=[86508], 99.95th=[90702], 00:29:39.956 | 99.99th=[90702] 00:29:39.956 bw ( KiB/s): min= 9160, max=12247, per=16.44%, avg=10703.50, stdev=2182.84, samples=2 00:29:39.956 iops : min= 2290, max= 3061, avg=2675.50, stdev=545.18, samples=2 00:29:39.956 lat (msec) : 4=0.22%, 10=4.77%, 20=48.43%, 50=39.00%, 100=7.57% 00:29:39.956 cpu : usr=2.56%, sys=3.05%, ctx=278, majf=0, minf=1 00:29:39.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:39.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.956 issued rwts: total=2560,2806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.956 job1: (groupid=0, jobs=1): err= 0: pid=3086348: Fri Nov 15 11:48:19 2024 00:29:39.956 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:29:39.956 slat (usec): min=3, max=4682, avg=79.53, stdev=503.25 00:29:39.956 clat (usec): min=3343, max=16568, avg=10397.11, stdev=2086.48 00:29:39.956 lat (usec): min=3367, max=18170, avg=10476.64, stdev=2115.99 00:29:39.956 clat percentiles (usec): 00:29:39.956 | 1.00th=[ 4490], 5.00th=[ 7439], 10.00th=[ 8979], 20.00th=[ 9241], 00:29:39.956 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10159], 00:29:39.956 | 70.00th=[11076], 80.00th=[12387], 90.00th=[13566], 95.00th=[13829], 00:29:39.956 | 99.00th=[15139], 99.50th=[16450], 99.90th=[16581], 99.95th=[16581], 00:29:39.956 | 99.99th=[16581] 00:29:39.956 write: IOPS=6069, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1002msec); 0 zone resets 00:29:39.956 slat (usec): min=3, max=31069, avg=83.65, stdev=625.86 00:29:39.956 clat (usec): min=512, max=44311, avg=11249.85, stdev=4827.04 00:29:39.956 lat (usec): min=1081, max=63325, avg=11333.50, stdev=4867.68 00:29:39.956 clat percentiles (usec): 00:29:39.956 | 1.00th=[ 5538], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[ 9896], 00:29:39.956 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:29:39.956 | 70.00th=[10945], 80.00th=[11076], 
90.00th=[11469], 95.00th=[14222], 00:29:39.956 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:29:39.956 | 99.99th=[44303] 00:29:39.956 bw ( KiB/s): min=24055, max=24055, per=36.95%, avg=24055.00, stdev= 0.00, samples=1 00:29:39.956 iops : min= 6013, max= 6013, avg=6013.00, stdev= 0.00, samples=1 00:29:39.956 lat (usec) : 750=0.02% 00:29:39.956 lat (msec) : 2=0.01%, 4=0.21%, 10=39.63%, 20=58.50%, 50=1.63% 00:29:39.956 cpu : usr=5.19%, sys=7.39%, ctx=492, majf=0, minf=1 00:29:39.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:29:39.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.956 issued rwts: total=5632,6082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.956 job2: (groupid=0, jobs=1): err= 0: pid=3086378: Fri Nov 15 11:48:19 2024 00:29:39.956 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:29:39.956 slat (usec): min=3, max=5658, avg=93.68, stdev=586.84 00:29:39.956 clat (usec): min=7607, max=18647, avg=12340.42, stdev=1700.67 00:29:39.956 lat (usec): min=7616, max=19131, avg=12434.10, stdev=1748.95 00:29:39.956 clat percentiles (usec): 00:29:39.956 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10683], 20.00th=[11207], 00:29:39.956 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:29:39.956 | 70.00th=[12649], 80.00th=[13173], 90.00th=[14877], 95.00th=[16057], 00:29:39.956 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[18220], 00:29:39.956 | 99.99th=[18744] 00:29:39.956 write: IOPS=5239, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1006msec); 0 zone resets 00:29:39.957 slat (usec): min=4, max=5654, avg=87.26, stdev=489.14 00:29:39.957 clat (usec): min=5299, max=17986, avg=12171.93, stdev=1302.18 00:29:39.957 lat (usec): min=5889, max=18001, avg=12259.19, stdev=1372.37 00:29:39.957 clat percentiles (usec): 00:29:39.957 | 1.00th=[ 7635], 5.00th=[10159], 10.00th=[11207], 20.00th=[11731], 00:29:39.957 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:29:39.957 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13042], 95.00th=[14484], 00:29:39.957 | 99.00th=[16712], 99.50th=[17433], 99.90th=[17695], 99.95th=[17957], 00:29:39.957 | 99.99th=[17957] 00:29:39.957 bw ( KiB/s): min=20439, max=20672, per=31.57%, avg=20555.50, stdev=164.76, samples=2 00:29:39.957 iops : min= 5109, max= 5168, avg=5138.50, stdev=41.72, samples=2 00:29:39.957 lat (msec) : 10=5.08%, 20=94.92% 00:29:39.957 cpu : usr=8.46%, sys=11.34%, ctx=390, majf=0, minf=1 00:29:39.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:39.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.957 issued rwts: total=5120,5271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.957 job3: (groupid=0, jobs=1): err= 0: pid=3086379: Fri Nov 15 11:48:19 2024 00:29:39.957 read: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec) 00:29:39.957 slat (usec): min=3, max=21182, avg=172.44, stdev=1174.49 00:29:39.957 clat (usec): min=4752, max=46655, avg=20822.65, stdev=8282.31 00:29:39.957 lat (usec): min=4758, max=46668, avg=20995.08, stdev=8352.82 00:29:39.957 clat percentiles (usec): 00:29:39.957 | 1.00th=[ 7439], 5.00th=[12780], 
10.00th=[14222], 20.00th=[14484], 00:29:39.957 | 30.00th=[15008], 40.00th=[15664], 50.00th=[16188], 60.00th=[21103], 00:29:39.957 | 70.00th=[23987], 80.00th=[29492], 90.00th=[36439], 95.00th=[37487], 00:29:39.957 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42730], 99.95th=[45351], 00:29:39.957 | 99.99th=[46400] 00:29:39.957 write: IOPS=2367, BW=9470KiB/s (9697kB/s)(9640KiB/1018msec); 0 zone resets 00:29:39.957 slat (usec): min=4, max=26534, avg=257.48, stdev=1400.07 00:29:39.957 clat (msec): min=3, max=113, avg=35.89, stdev=20.84 00:29:39.957 lat (msec): min=3, max=113, avg=36.15, stdev=20.94 00:29:39.957 clat percentiles (msec): 00:29:39.957 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 25], 00:29:39.957 | 30.00th=[ 27], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 31], 00:29:39.957 | 70.00th=[ 35], 80.00th=[ 52], 90.00th=[ 67], 95.00th=[ 74], 00:29:39.957 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 113], 99.95th=[ 113], 00:29:39.957 | 99.99th=[ 113] 00:29:39.957 bw ( KiB/s): min= 8560, max= 9704, per=14.03%, avg=9132.00, stdev=808.93, samples=2 00:29:39.957 iops : min= 2140, max= 2426, avg=2283.00, stdev=202.23, samples=2 00:29:39.957 lat (msec) : 4=0.13%, 10=2.62%, 20=30.01%, 50=56.37%, 100=9.62% 00:29:39.957 lat (msec) : 250=1.23% 00:29:39.957 cpu : usr=3.05%, sys=5.51%, ctx=267, majf=0, minf=1 00:29:39.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:39.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.957 issued rwts: total=2048,2410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.957 00:29:39.957 Run status group 0 (all jobs): 00:29:39.957 READ: bw=58.9MiB/s (61.8MB/s), 8047KiB/s-22.0MiB/s (8240kB/s-23.0MB/s), io=60.0MiB (62.9MB), run=1002-1018msec 00:29:39.957 WRITE: bw=63.6MiB/s (66.7MB/s), 9470KiB/s-23.7MiB/s (9697kB/s-24.9MB/s), io=64.7MiB (67.9MB), run=1002-1018msec 00:29:39.957 00:29:39.957 Disk stats (read/write): 00:29:39.957 nvme0n1: ios=2073/2447, merge=0/0, ticks=41344/63582, in_queue=104926, util=97.39% 00:29:39.957 nvme0n2: ios=4815/5120, merge=0/0, ticks=25525/32460, in_queue=57985, util=97.56% 00:29:39.957 nvme0n3: ios=4183/4608, merge=0/0, ticks=24484/25008, in_queue=49492, util=97.81% 00:29:39.957 nvme0n4: ios=1536/2047, merge=0/0, ticks=32204/71534, in_queue=103738, util=89.68% 00:29:39.957 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:39.957 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3086513 00:29:39.957 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:39.957 11:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:39.957 [global] 00:29:39.957 thread=1 00:29:39.957 invalidate=1 00:29:39.957 rw=read 00:29:39.957 time_based=1 00:29:39.957 runtime=10 00:29:39.957 ioengine=libaio 00:29:39.957 direct=1 00:29:39.957 bs=4096 00:29:39.957 iodepth=1 00:29:39.957 norandommap=1 00:29:39.957 numjobs=1 00:29:39.957 00:29:39.957 [job0] 00:29:39.957 filename=/dev/nvme0n1 00:29:39.957 [job1] 00:29:39.957 filename=/dev/nvme0n2 00:29:39.957 [job2] 00:29:39.957 filename=/dev/nvme0n3 00:29:39.957 [job3] 00:29:39.957 filename=/dev/nvme0n4 00:29:39.957 
Could not set queue depth (nvme0n1) 00:29:39.957 Could not set queue depth (nvme0n2) 00:29:39.957 Could not set queue depth (nvme0n3) 00:29:39.957 Could not set queue depth (nvme0n4) 00:29:39.957 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.957 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.957 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.957 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:39.957 fio-3.35 00:29:39.957 Starting 4 threads 00:29:43.238 11:48:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:43.238 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:43.238 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=581632, buflen=4096 00:29:43.238 fio: pid=3086614, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.238 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:43.238 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:43.238 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=335872, buflen=4096 00:29:43.238 fio: pid=3086613, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.495 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62013440, buflen=4096 00:29:43.495 fio: pid=3086611, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.495 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:43.495 11:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:43.753 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10399744, buflen=4096 00:29:43.753 fio: pid=3086612, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:43.753 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:43.753 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:43.753 00:29:43.753 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3086611: Fri Nov 15 11:48:24 2024 00:29:43.753 read: IOPS=4352, BW=17.0MiB/s (17.8MB/s)(59.1MiB/3479msec) 00:29:43.753 slat (usec): min=3, max=15774, avg=10.84, stdev=205.41 00:29:43.753 clat (usec): min=165, max=1412, avg=215.40, stdev=45.09 00:29:43.753 lat (usec): min=175, max=16120, avg=226.24, stdev=211.58 00:29:43.753 clat percentiles (usec): 00:29:43.753 | 
1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:29:43.753 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:29:43.753 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 258], 95.00th=[ 273], 00:29:43.753 | 99.00th=[ 424], 99.50th=[ 510], 99.90th=[ 562], 99.95th=[ 816], 00:29:43.753 | 99.99th=[ 1237] 00:29:43.753 bw ( KiB/s): min=16464, max=19296, per=93.70%, avg=17836.00, stdev=1046.03, samples=6 00:29:43.753 iops : min= 4116, max= 4824, avg=4459.00, stdev=261.51, samples=6 00:29:43.753 lat (usec) : 250=86.77%, 500=12.69%, 750=0.46%, 1000=0.03% 00:29:43.753 lat (msec) : 2=0.03% 00:29:43.753 cpu : usr=1.96%, sys=4.31%, ctx=15147, majf=0, minf=2 00:29:43.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.753 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.753 issued rwts: total=15141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.753 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3086612: Fri Nov 15 11:48:24 2024 00:29:43.753 read: IOPS=675, BW=2700KiB/s (2764kB/s)(9.92MiB/3762msec) 00:29:43.753 slat (usec): min=4, max=29900, avg=37.05, stdev=725.45 00:29:43.753 clat (usec): min=193, max=41462, avg=1432.76, stdev=6857.12 00:29:43.753 lat (usec): min=199, max=41478, avg=1469.83, stdev=6892.15 00:29:43.753 clat percentiles (usec): 00:29:43.753 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:29:43.753 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:29:43.753 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 371], 00:29:43.753 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:43.753 | 99.99th=[41681] 00:29:43.753 bw ( KiB/s): min= 96, max= 9700, per=10.77%, avg=2050.86, stdev=3479.92, samples=7 00:29:43.753 iops : min= 24, max= 2425, avg=512.71, stdev=869.98, samples=7 00:29:43.753 lat (usec) : 250=67.91%, 500=29.02%, 750=0.12% 00:29:43.753 lat (msec) : 50=2.91% 00:29:43.753 cpu : usr=0.51%, sys=0.85%, ctx=2545, majf=0, minf=1 00:29:43.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.753 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.753 issued rwts: total=2540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.753 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3086613: Fri Nov 15 11:48:24 2024 00:29:43.753 read: IOPS=25, BW=102KiB/s (104kB/s)(328KiB/3223msec) 00:29:43.753 slat (nsec): min=7940, max=38135, avg=20206.78, stdev=9318.98 00:29:43.753 clat (usec): min=238, max=41988, avg=38996.77, stdev=8790.18 00:29:43.753 lat (usec): min=252, max=42007, avg=39017.02, stdev=8789.05 00:29:43.753 clat percentiles (usec): 00:29:43.753 | 1.00th=[ 239], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:29:43.753 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:43.753 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:43.753 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:43.753 | 99.99th=[42206] 00:29:43.753 bw ( KiB/s): min= 96, max= 112, per=0.54%, avg=102.67, 
stdev= 6.02, samples=6 00:29:43.754 iops : min= 24, max= 28, avg=25.67, stdev= 1.51, samples=6 00:29:43.754 lat (usec) : 250=1.20%, 500=2.41%, 750=1.20% 00:29:43.754 lat (msec) : 50=93.98% 00:29:43.754 cpu : usr=0.09%, sys=0.00%, ctx=83, majf=0, minf=1 00:29:43.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.754 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.754 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.754 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3086614: Fri Nov 15 11:48:24 2024 00:29:43.754 read: IOPS=49, BW=195KiB/s (200kB/s)(568KiB/2906msec) 00:29:43.754 slat (nsec): min=6605, max=42474, avg=20674.30, stdev=8389.27 00:29:43.754 clat (usec): min=257, max=42516, avg=20275.06, stdev=20562.44 00:29:43.754 lat (usec): min=267, max=42533, avg=20295.78, stdev=20558.36 00:29:43.754 clat percentiles (usec): 00:29:43.754 | 1.00th=[ 260], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 314], 00:29:43.754 | 30.00th=[ 326], 40.00th=[ 400], 50.00th=[ 570], 60.00th=[41157], 00:29:43.754 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:43.754 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:29:43.754 | 99.99th=[42730] 00:29:43.754 bw ( KiB/s): min= 96, max= 344, per=1.09%, avg=208.00, stdev=121.46, samples=5 00:29:43.754 iops : min= 24, max= 86, avg=52.00, stdev=30.36, samples=5 00:29:43.754 lat (usec) : 500=45.45%, 750=5.59% 00:29:43.754 lat (msec) : 50=48.25% 00:29:43.754 cpu : usr=0.10%, sys=0.10%, ctx=143, majf=0, minf=2 00:29:43.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:43.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.754 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.754 issued rwts: total=143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:43.754 00:29:43.754 Run status group 0 (all jobs): 00:29:43.754 READ: bw=18.6MiB/s (19.5MB/s), 102KiB/s-17.0MiB/s (104kB/s-17.8MB/s), io=69.9MiB (73.3MB), run=2906-3762msec 00:29:43.754 00:29:43.754 Disk stats (read/write): 00:29:43.754 nvme0n1: ios=14710/0, merge=0/0, ticks=3005/0, in_queue=3005, util=94.68% 00:29:43.754 nvme0n2: ios=2037/0, merge=0/0, ticks=3502/0, in_queue=3502, util=95.18% 00:29:43.754 nvme0n3: ios=79/0, merge=0/0, ticks=3077/0, in_queue=3077, util=96.79% 00:29:43.754 nvme0n4: ios=141/0, merge=0/0, ticks=2837/0, in_queue=2837, util=96.71% 00:29:44.012 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.012 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:44.577 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.577 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:44.577 11:48:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.577 11:48:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:44.835 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:44.835 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3086513 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:45.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:45.400 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:45.401 nvmf hotplug test: fio failed as expected 00:29:45.401 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.659 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:45.659 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:45.659 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:45.660 11:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.660 11:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.660 rmmod nvme_tcp 00:29:45.660 rmmod nvme_fabrics 00:29:45.660 rmmod nvme_keyring 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3083997 ']' 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3083997 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3083997 ']' 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3083997 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3083997 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3083997' 00:29:45.660 killing process with pid 3083997 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3083997 00:29:45.660 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3083997 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.918 11:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:48.458 00:29:48.458 real 0m23.704s 00:29:48.458 user 1m7.333s 00:29:48.458 sys 0m9.919s 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 ************************************ 00:29:48.458 END TEST nvmf_fio_target 00:29:48.458 ************************************ 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:48.458 ************************************ 00:29:48.458 START TEST nvmf_bdevio 00:29:48.458 ************************************ 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:48.458 * Looking for test storage... 
00:29:48.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.458 --rc genhtml_branch_coverage=1 00:29:48.458 --rc genhtml_function_coverage=1 00:29:48.458 --rc genhtml_legend=1 00:29:48.458 --rc geninfo_all_blocks=1 00:29:48.458 --rc geninfo_unexecuted_blocks=1 00:29:48.458 00:29:48.458 ' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.458 --rc genhtml_branch_coverage=1 00:29:48.458 --rc genhtml_function_coverage=1 00:29:48.458 --rc genhtml_legend=1 00:29:48.458 --rc geninfo_all_blocks=1 00:29:48.458 --rc geninfo_unexecuted_blocks=1 00:29:48.458 00:29:48.458 ' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.458 --rc genhtml_branch_coverage=1 00:29:48.458 --rc genhtml_function_coverage=1 00:29:48.458 --rc genhtml_legend=1 00:29:48.458 --rc geninfo_all_blocks=1 00:29:48.458 --rc geninfo_unexecuted_blocks=1 00:29:48.458 00:29:48.458 ' 00:29:48.458 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.458 --rc genhtml_branch_coverage=1 00:29:48.458 --rc genhtml_function_coverage=1 00:29:48.458 --rc genhtml_legend=1 00:29:48.458 --rc geninfo_all_blocks=1 00:29:48.458 --rc geninfo_unexecuted_blocks=1 00:29:48.458 00:29:48.458 ' 00:29:48.459 11:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.459 11:48:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.459 11:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:50.363 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:50.363 11:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:50.363 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:50.363 Found net devices under 0000:09:00.0: cvl_0_0 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:50.363 Found net devices under 0000:09:00.1: cvl_0_1 00:29:50.363 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:29:50.364 00:29:50.364 --- 10.0.0.2 ping statistics --- 00:29:50.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.364 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:50.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:29:50.364 00:29:50.364 --- 10.0.0.1 ping statistics --- 00:29:50.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.364 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.364 11:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3089238 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3089238 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3089238 ']' 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.364 11:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.623 [2024-11-15 11:48:30.797247] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:50.623 [2024-11-15 11:48:30.798290] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:29:50.623 [2024-11-15 11:48:30.798361] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.623 [2024-11-15 11:48:30.869274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.623 [2024-11-15 11:48:30.927174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.623 [2024-11-15 11:48:30.927226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.623 [2024-11-15 11:48:30.927253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.623 [2024-11-15 11:48:30.927264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.623 [2024-11-15 11:48:30.927274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.623 [2024-11-15 11:48:30.928945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:50.623 [2024-11-15 11:48:30.928995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:50.623 [2024-11-15 11:48:30.929038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:50.623 [2024-11-15 11:48:30.929041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.623 [2024-11-15 11:48:31.015050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
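For reference, the nvmftestinit/nvmfappstart sequence traced above amounts to the following short command list; the namespace, interface names, addresses, core mask and binary path are the ones reported in this run, condensed here as a sketch rather than the exact helper plumbing:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # start the target inside the namespace, in interrupt mode, on the 0x78 core mask (cores 3-6)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78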
00:29:50.623 [2024-11-15 11:48:31.015268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:50.623 [2024-11-15 11:48:31.015568] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:50.623 [2024-11-15 11:48:31.016123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:50.623 [2024-11-15 11:48:31.016392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:50.623 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.623 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:29:50.623 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.623 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.623 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.883 [2024-11-15 11:48:31.065713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.883 Malloc0 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.883 11:48:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:50.883 [2024-11-15 11:48:31.137876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.883 { 00:29:50.883 "params": { 00:29:50.883 "name": "Nvme$subsystem", 00:29:50.883 "trtype": "$TEST_TRANSPORT", 00:29:50.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.883 "adrfam": "ipv4", 00:29:50.883 "trsvcid": "$NVMF_PORT", 00:29:50.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.883 "hdgst": ${hdgst:-false}, 00:29:50.883 "ddgst": ${ddgst:-false} 00:29:50.883 }, 00:29:50.883 "method": "bdev_nvme_attach_controller" 00:29:50.883 } 00:29:50.883 EOF 00:29:50.883 )") 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:50.883 11:48:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.883 "params": { 00:29:50.883 "name": "Nvme1", 00:29:50.883 "trtype": "tcp", 00:29:50.883 "traddr": "10.0.0.2", 00:29:50.884 "adrfam": "ipv4", 00:29:50.884 "trsvcid": "4420", 00:29:50.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.884 "hdgst": false, 00:29:50.884 "ddgst": false 00:29:50.884 }, 00:29:50.884 "method": "bdev_nvme_attach_controller" 00:29:50.884 }' 00:29:50.884 [2024-11-15 11:48:31.187987] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
00:29:50.884 [2024-11-15 11:48:31.188062] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089262 ] 00:29:50.884 [2024-11-15 11:48:31.267484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:51.144 [2024-11-15 11:48:31.331719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.144 [2024-11-15 11:48:31.331772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:51.144 [2024-11-15 11:48:31.331776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.403 I/O targets: 00:29:51.403 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:51.403 00:29:51.403 00:29:51.403 CUnit - A unit testing framework for C - Version 2.1-3 00:29:51.403 http://cunit.sourceforge.net/ 00:29:51.403 00:29:51.403 00:29:51.403 Suite: bdevio tests on: Nvme1n1 00:29:51.403 Test: blockdev write read block ...passed 00:29:51.403 Test: blockdev write zeroes read block ...passed 00:29:51.403 Test: blockdev write zeroes read no split ...passed 00:29:51.403 Test: blockdev write zeroes read split ...passed 00:29:51.403 Test: blockdev write zeroes read split partial ...passed 00:29:51.403 Test: blockdev reset ...[2024-11-15 11:48:31.777637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:51.403 [2024-11-15 11:48:31.777742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2266640 (9): Bad file descriptor 00:29:51.660 [2024-11-15 11:48:31.869691] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:29:51.660 passed 00:29:51.660 Test: blockdev write read 8 blocks ...passed 00:29:51.660 Test: blockdev write read size > 128k ...passed 00:29:51.660 Test: blockdev write read invalid size ...passed 00:29:51.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:51.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:51.660 Test: blockdev write read max offset ...passed 00:29:51.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:51.660 Test: blockdev writev readv 8 blocks ...passed 00:29:51.918 Test: blockdev writev readv 30 x 1block ...passed 00:29:51.918 Test: blockdev writev readv block ...passed 00:29:51.918 Test: blockdev writev readv size > 128k ...passed 00:29:51.918 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:51.918 Test: blockdev comparev and writev ...[2024-11-15 11:48:32.163430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.163464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.163489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.163507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.163901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.163927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.163950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.163967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.164362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.164386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.164408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.164425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.164803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.164828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.164849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:51.918 [2024-11-15 11:48:32.164866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:51.918 passed 00:29:51.918 Test: blockdev nvme passthru rw ...passed 00:29:51.918 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:48:32.246576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.918 [2024-11-15 11:48:32.246611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.246759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.918 [2024-11-15 11:48:32.246781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.246921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.918 [2024-11-15 11:48:32.246944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:51.918 [2024-11-15 11:48:32.247085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:51.918 [2024-11-15 11:48:32.247109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:51.918 passed 00:29:51.918 Test: blockdev nvme admin passthru ...passed 00:29:51.918 Test: blockdev copy ...passed 00:29:51.918 00:29:51.918 Run Summary: Type Total Ran Passed Failed Inactive 00:29:51.918 suites 1 1 n/a 0 0 00:29:51.918 tests 23 23 23 0 0 00:29:51.918 asserts 152 152 152 0 n/a 00:29:51.918 00:29:51.918 Elapsed time = 1.341 seconds 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:52.176 rmmod nvme_tcp 00:29:52.176 rmmod nvme_fabrics 00:29:52.176 rmmod nvme_keyring 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
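For reference, the subsystem configuration behind the bdevio run summarized above is the following RPC sequence; rpc_cmd in the trace is assumed here to be the autotest wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock, while the arguments themselves are taken verbatim from the trace:

    # target-side setup: transport, backing bdev, subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then attaches from the host side using the JSON produced by gen_nvmf_target_json above
    test/bdev/bdevio/bdevio --json /dev/fd/62

The COMPARE FAILURE and ABORTED - FAILED FUSED notices printed during the comparev/writev cases appear to come from fused compare-and-write paths that bdevio exercises deliberately; the run summary above reports all 23 tests and 152 asserts as passed.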
00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3089238 ']' 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3089238 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3089238 ']' 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3089238 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3089238 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3089238' 00:29:52.176 killing process with pid 3089238 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3089238 00:29:52.176 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3089238 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.434 11:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.969 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.969 00:29:54.969 real 0m6.460s 00:29:54.969 user 
0m9.372s 00:29:54.969 sys 0m2.527s 00:29:54.969 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.969 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:54.969 ************************************ 00:29:54.969 END TEST nvmf_bdevio 00:29:54.969 ************************************ 00:29:54.969 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:54.969 00:29:54.969 real 3m55.751s 00:29:54.969 user 8m57.225s 00:29:54.969 sys 1m24.098s 00:29:54.969 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.969 11:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:54.969 ************************************ 00:29:54.969 END TEST nvmf_target_core_interrupt_mode 00:29:54.969 ************************************ 00:29:54.969 11:48:34 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:54.969 11:48:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:54.969 11:48:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.969 11:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:54.969 ************************************ 00:29:54.969 START TEST nvmf_interrupt 00:29:54.969 ************************************ 00:29:54.969 11:48:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:54.969 * Looking for test storage... 
00:29:54.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:54.969 11:48:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:54.969 11:48:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:29:54.969 11:48:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.969 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:54.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.969 --rc genhtml_branch_coverage=1 00:29:54.969 --rc genhtml_function_coverage=1 00:29:54.969 --rc genhtml_legend=1 00:29:54.970 --rc geninfo_all_blocks=1 00:29:54.970 --rc geninfo_unexecuted_blocks=1 00:29:54.970 00:29:54.970 ' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.970 --rc genhtml_branch_coverage=1 00:29:54.970 --rc genhtml_function_coverage=1 00:29:54.970 --rc genhtml_legend=1 00:29:54.970 --rc geninfo_all_blocks=1 00:29:54.970 --rc geninfo_unexecuted_blocks=1 00:29:54.970 00:29:54.970 ' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.970 --rc genhtml_branch_coverage=1 00:29:54.970 --rc genhtml_function_coverage=1 00:29:54.970 --rc genhtml_legend=1 00:29:54.970 --rc geninfo_all_blocks=1 00:29:54.970 --rc geninfo_unexecuted_blocks=1 00:29:54.970 00:29:54.970 ' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.970 --rc genhtml_branch_coverage=1 00:29:54.970 --rc genhtml_function_coverage=1 00:29:54.970 --rc genhtml_legend=1 00:29:54.970 --rc geninfo_all_blocks=1 00:29:54.970 --rc geninfo_unexecuted_blocks=1 00:29:54.970 00:29:54.970 ' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.970 11:48:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.868 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:56.869 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.869 11:48:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:56.869 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:56.869 Found net devices under 0000:09:00.0: cvl_0_0 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:56.869 Found net devices under 0000:09:00.1: cvl_0_1 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.869 11:48:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.869 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.127 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:29:57.128 00:29:57.128 --- 10.0.0.2 ping statistics --- 00:29:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.128 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:57.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:29:57.128 00:29:57.128 --- 10.0.0.1 ping statistics --- 00:29:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.128 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3091476 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3091476 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3091476 ']' 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.128 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.128 [2024-11-15 11:48:37.416043] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:57.128 [2024-11-15 11:48:37.417066] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:29:57.128 [2024-11-15 11:48:37.417121] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.128 [2024-11-15 11:48:37.489425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.128 [2024-11-15 11:48:37.548458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
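[editor note] The records above show nvmftestinit building the back-to-back TCP topology for this run: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables ACCEPT rule is inserted for TCP/4420, a ping in each direction confirms the link, and nvmf_tgt is then launched inside the namespace with --interrupt-mode on cores 0-1. The sketch below recreates an equivalent topology; it is illustrative only and assumes a veth pair and placeholder names (tgt_ns, veth0/veth1) instead of the physical NICs and helper functions used by the test.

    # Sketch only: namespaced target / root-namespace initiator over a veth pair
    ip netns add tgt_ns
    ip link add veth0 type veth peer name veth1
    ip link set veth0 netns tgt_ns
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth0
    ip addr add 10.0.0.1/24 dev veth1
    ip netns exec tgt_ns ip link set veth0 up
    ip netns exec tgt_ns ip link set lo up
    ip link set veth1 up
    # allow NVMe/TCP traffic in on the initiator-facing interface
    iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec tgt_ns ping -c 1 10.0.0.1
    # start the target in interrupt mode inside the namespace (path assumes an SPDK build tree)
    ip netns exec tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &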
00:29:57.128 [2024-11-15 11:48:37.548525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.128 [2024-11-15 11:48:37.548539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.128 [2024-11-15 11:48:37.548566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.128 [2024-11-15 11:48:37.548576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:57.387 [2024-11-15 11:48:37.553325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.387 [2024-11-15 11:48:37.553336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.387 [2024-11-15 11:48:37.651169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:57.387 [2024-11-15 11:48:37.651206] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:57.387 [2024-11-15 11:48:37.651468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:57.387 5000+0 records in 00:29:57.387 5000+0 records out 00:29:57.387 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0145631 s, 703 MB/s 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.387 AIO0 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.387 [2024-11-15 11:48:37.754053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.387 11:48:37 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:57.387 [2024-11-15 11:48:37.778247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3091476 0 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3091476 0 idle 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:29:57.387 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091476 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0' 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091476 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.28 reactor_0 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3091476 1 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3091476 1 idle 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:29:57.646 11:48:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091481 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091481 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3091623 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
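[editor note] The reactor_is_idle checks above and the reactor_is_busy checks that follow use the same mechanism: take a single threaded snapshot of top for the target PID, pick the reactor_N row, read its %CPU column ($9), and compare it against a threshold, retrying up to ten times. For this run the busy threshold is overridden to 30 (the idle threshold is also 30; the defaults differ). Below is a simplified, hedged version of that check; the real helper lives in test/interrupt/common.sh and does more bookkeeping, and check_reactor here is a made-up name for illustration.

    # Sketch: classify one reactor thread as busy or idle from a single top snapshot.
    check_reactor() {
        local pid=$1 idx=$2 want=$3      # want is "busy" or "idle"
        local threshold=30 tries=10 cpu
        while (( tries-- > 0 )); do
            # -b batch, -H per-thread, -n 1 one iteration; column 9 is %CPU
            cpu=$(top -bHn 1 -p "$pid" -w 256 | awk -v r="reactor_$idx" '$0 ~ r {print int($9)}')
            cpu=${cpu:-0}
            if [[ $want == busy ]]; then
                (( cpu >= threshold )) && return 0
            else
                (( cpu <= threshold )) && return 0
            fi
            sleep 1
        done
        return 1
    }
    # usage: check_reactor 3091476 0 idle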
00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3091476 0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3091476 0 busy 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091476 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.28 reactor_0' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091476 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.28 reactor_0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:57.904 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:57.905 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:57.905 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:57.905 11:48:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091476 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.56 reactor_0' 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091476 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.56 reactor_0 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3091476 1 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3091476 1 busy 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091481 root 20 0 128.2g 48384 34944 R 87.5 0.1 0:01.29 reactor_1' 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091481 root 20 0 128.2g 48384 34944 R 87.5 0.1 0:01.29 reactor_1 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:59.281 11:48:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3091623 00:30:09.353 Initializing NVMe Controllers 00:30:09.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.353 Controller IO queue size 256, less than required. 00:30:09.353 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:09.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:09.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:09.353 Initialization complete. Launching workers. 
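[editor note] The summary table printed below reports one row per perf core (lcore 2 and lcore 3, one I/O queue pair each) plus a Total row. The Total IOPS and MiB/s are plain sums, and the Total average latency is consistent with the IOPS-weighted mean of the per-core averages, which is handy when sanity-checking runs by hand. A small hedged check against the numbers printed in this run:

    # Sketch: recompute the Total row from the two per-core rows below.
    awk 'BEGIN {
        iops1 = 13251.39; lat1 = 19332.32;   # core 2 row
        iops2 = 13710.58; lat2 = 18683.72;   # core 3 row
        total = iops1 + iops2;                              # 26961.97 IOPS
        avg   = (iops1*lat1 + iops2*lat2) / total;          # ~19002.5 us
        printf "total IOPS %.2f, weighted avg latency %.2f us\n", total, avg
    }'
    # matches the Total row (26961.97 IOPS, 19002.49 us) within rounding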
00:30:09.353 ======================================================== 00:30:09.353 Latency(us) 00:30:09.353 Device Information : IOPS MiB/s Average min max 00:30:09.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13251.39 51.76 19332.32 4039.36 58284.45 00:30:09.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13710.58 53.56 18683.72 4523.93 23041.21 00:30:09.353 ======================================================== 00:30:09.353 Total : 26961.97 105.32 19002.49 4039.36 58284.45 00:30:09.353 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3091476 0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3091476 0 idle 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091476 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0' 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091476 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3091476 1 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3091476 1 idle 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:09.353 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091481 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1' 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091481 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.97 reactor_1 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:09.354 11:48:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:09.354 11:48:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:30:09.354 11:48:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:30:09.354 11:48:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:09.354 11:48:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:09.354 11:48:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3091476 0 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3091476 0 idle 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:30:10.733 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091476 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.32 reactor_0' 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091476 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.32 reactor_0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3091476 1 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3091476 1 idle 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3091476 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
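[editor note] Around these post-I/O idle checks the initiator side is driven with nvme-cli: the connect a few records above attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, the test then waits for a block device whose serial matches SPDKISFASTANDAWESOME, and it disconnects again shortly below. A minimal host-side sketch, assuming nvme-cli is installed and the namespaced target from earlier is still listening (the test itself uses helper wrappers rather than these exact lines):

    # Sketch: attach to the test subsystem, wait for its namespace, then detach.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$(nvme gen-hostnqn)"
    for i in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1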
00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3091476 -w 256 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3091481 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3091481 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:10.993 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:11.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.251 rmmod nvme_tcp 00:30:11.251 rmmod nvme_fabrics 00:30:11.251 rmmod nvme_keyring 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3091476 ']' 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3091476 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3091476 ']' 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3091476 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3091476 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3091476' 00:30:11.251 killing process with pid 3091476 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3091476 00:30:11.251 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3091476 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.510 11:48:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.045 11:48:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.045 00:30:14.045 real 0m18.943s 00:30:14.045 user 0m36.878s 00:30:14.045 sys 0m6.781s 00:30:14.045 11:48:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.045 11:48:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:14.045 ************************************ 00:30:14.045 END TEST nvmf_interrupt 00:30:14.045 ************************************ 00:30:14.045 00:30:14.045 real 25m0.413s 00:30:14.045 user 58m37.430s 00:30:14.045 sys 6m36.698s 00:30:14.045 11:48:53 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.045 11:48:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.045 ************************************ 00:30:14.045 END TEST nvmf_tcp 00:30:14.045 ************************************ 00:30:14.045 11:48:53 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:14.045 11:48:53 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:14.045 11:48:53 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:14.045 11:48:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.045 11:48:53 -- common/autotest_common.sh@10 -- # set +x 00:30:14.045 ************************************ 00:30:14.045 START TEST spdkcli_nvmf_tcp 00:30:14.045 ************************************ 00:30:14.045 11:48:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:14.045 * Looking for test storage... 00:30:14.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.045 --rc genhtml_branch_coverage=1 00:30:14.045 --rc genhtml_function_coverage=1 00:30:14.045 --rc genhtml_legend=1 00:30:14.045 --rc geninfo_all_blocks=1 00:30:14.045 --rc geninfo_unexecuted_blocks=1 00:30:14.045 00:30:14.045 ' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.045 --rc genhtml_branch_coverage=1 00:30:14.045 --rc genhtml_function_coverage=1 00:30:14.045 --rc genhtml_legend=1 00:30:14.045 --rc geninfo_all_blocks=1 00:30:14.045 --rc geninfo_unexecuted_blocks=1 00:30:14.045 00:30:14.045 ' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.045 --rc genhtml_branch_coverage=1 00:30:14.045 --rc genhtml_function_coverage=1 00:30:14.045 --rc genhtml_legend=1 00:30:14.045 --rc geninfo_all_blocks=1 00:30:14.045 --rc geninfo_unexecuted_blocks=1 00:30:14.045 00:30:14.045 ' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.045 --rc genhtml_branch_coverage=1 00:30:14.045 --rc genhtml_function_coverage=1 00:30:14.045 --rc genhtml_legend=1 00:30:14.045 --rc geninfo_all_blocks=1 00:30:14.045 --rc geninfo_unexecuted_blocks=1 00:30:14.045 00:30:14.045 ' 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.045 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:14.046 
11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:14.046 11:48:54 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:14.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3093651 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3093651 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3093651 ']' 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.046 [2024-11-15 11:48:54.180099] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
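[editor note] For the spdkcli test the target is started standalone (no namespace) as nvmf_tgt -m 0x3 -p 0, and waitforlisten then blocks until the process answers on its RPC socket while the EAL initialization logged around here completes. A rough sketch of that wait, assuming the default /var/tmp/spdk.sock and an SPDK build tree; the repository helper also checks that the process is still alive and times out differently:

    # Sketch: start the target and poll the RPC socket until it responds.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    for i in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done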
00:30:14.046 [2024-11-15 11:48:54.180189] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093651 ] 00:30:14.046 [2024-11-15 11:48:54.247842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:14.046 [2024-11-15 11:48:54.305361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.046 [2024-11-15 11:48:54.305365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:14.046 11:48:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:14.046 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:14.046 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:14.046 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:14.046 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:14.046 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:14.046 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:14.046 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.046 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:14.046 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:14.046 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:14.046 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:14.046 ' 00:30:17.328 [2024-11-15 11:48:57.065367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.262 [2024-11-15 11:48:58.333756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:20.790 [2024-11-15 11:49:00.676917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:22.692 [2024-11-15 11:49:02.699105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:24.069 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:24.069 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:24.069 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:24.069 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:24.069 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:24.069 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:24.069 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:24.069 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:24.069 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:24.069 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:24.069 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:24.069 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:24.070 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:24.070 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:24.070 11:49:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.636 
11:49:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.636 11:49:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:24.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:24.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:24.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:24.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:24.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:24.636 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:24.636 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:24.636 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:24.636 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:24.636 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:24.636 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:24.636 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:24.636 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:24.636 ' 00:30:29.903 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:29.903 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:29.903 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.903 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:29.903 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:29.903 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:29.903 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:29.903 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:29.903 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:29.903 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:29.903 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:29.903 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:29.903 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:29.903 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.903 
11:49:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3093651 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3093651 ']' 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3093651 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.903 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093651 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093651' 00:30:30.162 killing process with pid 3093651 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3093651 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3093651 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3093651 ']' 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3093651 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3093651 ']' 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3093651 00:30:30.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3093651) - No such process 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3093651 is not found' 00:30:30.162 Process with pid 3093651 is not found 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:30.162 00:30:30.162 real 0m16.617s 00:30:30.162 user 0m35.392s 00:30:30.162 sys 0m0.757s 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.162 11:49:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.162 ************************************ 00:30:30.162 END TEST spdkcli_nvmf_tcp 00:30:30.162 ************************************ 00:30:30.421 11:49:10 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:30.421 11:49:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:30.422 11:49:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.422 11:49:10 -- common/autotest_common.sh@10 -- # set +x 00:30:30.422 ************************************ 00:30:30.422 START TEST nvmf_identify_passthru 00:30:30.422 ************************************ 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:30.422 * Looking for test 
storage... 00:30:30.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.422 --rc genhtml_branch_coverage=1 00:30:30.422 --rc genhtml_function_coverage=1 00:30:30.422 --rc genhtml_legend=1 00:30:30.422 --rc geninfo_all_blocks=1 00:30:30.422 --rc geninfo_unexecuted_blocks=1 00:30:30.422 00:30:30.422 ' 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.422 --rc genhtml_branch_coverage=1 00:30:30.422 --rc genhtml_function_coverage=1 00:30:30.422 --rc genhtml_legend=1 00:30:30.422 --rc geninfo_all_blocks=1 00:30:30.422 --rc geninfo_unexecuted_blocks=1 00:30:30.422 00:30:30.422 ' 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.422 --rc genhtml_branch_coverage=1 00:30:30.422 --rc genhtml_function_coverage=1 00:30:30.422 --rc genhtml_legend=1 00:30:30.422 --rc geninfo_all_blocks=1 00:30:30.422 --rc geninfo_unexecuted_blocks=1 00:30:30.422 00:30:30.422 ' 00:30:30.422 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:30.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.422 --rc genhtml_branch_coverage=1 00:30:30.422 --rc genhtml_function_coverage=1 00:30:30.422 --rc genhtml_legend=1 00:30:30.422 --rc geninfo_all_blocks=1 00:30:30.422 --rc geninfo_unexecuted_blocks=1 00:30:30.422 00:30:30.422 ' 00:30:30.422 11:49:10 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.422 11:49:10 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.422 11:49:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.422 11:49:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.422 11:49:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:30.422 11:49:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.422 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.422 11:49:10 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.422 11:49:10 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.422 11:49:10 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.423 11:49:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.423 11:49:10 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.423 11:49:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:30.423 11:49:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.423 11:49:10 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.423 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:30.423 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.423 11:49:10 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.423 11:49:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.955 11:49:12 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:32.955 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:32.955 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:32.955 Found net devices under 0000:09:00.0: cvl_0_0 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:32.955 Found net devices under 0000:09:00.1: cvl_0_1 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.955 11:49:12 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.955 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.956 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.956 11:49:12 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:30:32.956 00:30:32.956 --- 10.0.0.2 ping statistics --- 00:30:32.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.956 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:30:32.956 00:30:32.956 --- 10.0.0.1 ping statistics --- 00:30:32.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.956 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.956 11:49:13 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:30:32.956 11:49:13 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:32.956 11:49:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:37.138 11:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:30:37.138 11:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:30:37.138 11:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:37.138 11:49:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3098303 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.320 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3098303 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3098303 ']' 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:41.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.320 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.320 [2024-11-15 11:49:21.600416] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:30:41.320 [2024-11-15 11:49:21.600501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.320 [2024-11-15 11:49:21.671528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:41.320 [2024-11-15 11:49:21.730673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.320 [2024-11-15 11:49:21.730725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:41.320 [2024-11-15 11:49:21.730739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.320 [2024-11-15 11:49:21.730750] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.320 [2024-11-15 11:49:21.730760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.320 [2024-11-15 11:49:21.732246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.320 [2024-11-15 11:49:21.732324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.320 [2024-11-15 11:49:21.732381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.320 [2024-11-15 11:49:21.732384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.578 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.578 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:30:41.578 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:41.578 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.578 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.578 INFO: Log level set to 20 00:30:41.578 INFO: Requests: 00:30:41.578 { 00:30:41.578 "jsonrpc": "2.0", 00:30:41.578 "method": "nvmf_set_config", 00:30:41.578 "id": 1, 00:30:41.578 "params": { 00:30:41.578 "admin_cmd_passthru": { 00:30:41.578 "identify_ctrlr": true 00:30:41.578 } 00:30:41.578 } 00:30:41.578 } 00:30:41.578 00:30:41.578 INFO: response: 00:30:41.579 { 00:30:41.579 "jsonrpc": "2.0", 00:30:41.579 "id": 1, 00:30:41.579 "result": true 00:30:41.579 } 00:30:41.579 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.579 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.579 INFO: Setting log level to 20 00:30:41.579 INFO: Setting log level to 20 00:30:41.579 INFO: Log level set to 20 00:30:41.579 INFO: Log level set to 20 00:30:41.579 INFO: Requests: 00:30:41.579 { 00:30:41.579 "jsonrpc": "2.0", 00:30:41.579 "method": "framework_start_init", 00:30:41.579 "id": 1 00:30:41.579 } 00:30:41.579 00:30:41.579 INFO: Requests: 00:30:41.579 { 00:30:41.579 "jsonrpc": "2.0", 00:30:41.579 "method": "framework_start_init", 00:30:41.579 "id": 1 00:30:41.579 } 00:30:41.579 00:30:41.579 [2024-11-15 11:49:21.942948] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:41.579 INFO: response: 00:30:41.579 { 00:30:41.579 "jsonrpc": "2.0", 00:30:41.579 "id": 1, 00:30:41.579 "result": true 00:30:41.579 } 00:30:41.579 00:30:41.579 INFO: response: 00:30:41.579 { 00:30:41.579 "jsonrpc": "2.0", 00:30:41.579 "id": 1, 00:30:41.579 "result": true 00:30:41.579 } 00:30:41.579 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.579 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.579 11:49:21 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.579 INFO: Setting log level to 40 00:30:41.579 INFO: Setting log level to 40 00:30:41.579 INFO: Setting log level to 40 00:30:41.579 [2024-11-15 11:49:21.953124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.579 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.579 11:49:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.579 11:49:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.860 Nvme0n1 00:30:44.860 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.860 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:44.860 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.860 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.860 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.861 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.861 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.861 [2024-11-15 11:49:24.869644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.861 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.861 [ 00:30:44.861 { 00:30:44.861 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:44.861 "subtype": "Discovery", 00:30:44.861 "listen_addresses": [], 00:30:44.861 "allow_any_host": true, 00:30:44.861 "hosts": [] 00:30:44.861 }, 00:30:44.861 { 00:30:44.861 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.861 "subtype": "NVMe", 00:30:44.861 "listen_addresses": [ 00:30:44.861 { 00:30:44.861 "trtype": "TCP", 00:30:44.861 "adrfam": "IPv4", 00:30:44.861 "traddr": "10.0.0.2", 00:30:44.861 "trsvcid": "4420" 00:30:44.861 } 00:30:44.861 ], 00:30:44.861 "allow_any_host": true, 00:30:44.861 "hosts": [], 00:30:44.861 "serial_number": 
"SPDK00000000000001", 00:30:44.861 "model_number": "SPDK bdev Controller", 00:30:44.861 "max_namespaces": 1, 00:30:44.861 "min_cntlid": 1, 00:30:44.861 "max_cntlid": 65519, 00:30:44.861 "namespaces": [ 00:30:44.861 { 00:30:44.861 "nsid": 1, 00:30:44.861 "bdev_name": "Nvme0n1", 00:30:44.861 "name": "Nvme0n1", 00:30:44.861 "nguid": "59E922B551FC49B99686943B8FA68079", 00:30:44.861 "uuid": "59e922b5-51fc-49b9-9686-943b8fa68079" 00:30:44.861 } 00:30:44.861 ] 00:30:44.861 } 00:30:44.861 ] 00:30:44.861 11:49:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.861 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:44.861 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:44.861 11:49:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.861 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:44.861 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:44.861 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:44.861 11:49:25 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:44.861 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.861 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:44.861 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.861 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:44.861 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.861 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.118 rmmod nvme_tcp 00:30:45.118 rmmod nvme_fabrics 00:30:45.118 rmmod nvme_keyring 00:30:45.118 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.118 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:45.118 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:45.118 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 3098303 ']' 00:30:45.118 11:49:25 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3098303 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3098303 ']' 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3098303 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3098303 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3098303' 00:30:45.118 killing process with pid 3098303 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3098303 00:30:45.118 11:49:25 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3098303 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.019 11:49:26 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.019 11:49:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.019 11:49:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.924 11:49:28 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:48.924 00:30:48.924 real 0m18.369s 00:30:48.924 user 0m26.584s 00:30:48.924 sys 0m3.292s 00:30:48.924 11:49:28 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.924 11:49:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:48.924 ************************************ 00:30:48.924 END TEST nvmf_identify_passthru 00:30:48.924 ************************************ 00:30:48.924 11:49:29 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:48.924 11:49:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.924 11:49:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.924 11:49:29 -- common/autotest_common.sh@10 -- # set +x 00:30:48.924 ************************************ 00:30:48.924 START TEST nvmf_dif 00:30:48.924 ************************************ 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:48.924 * Looking for test 
storage... 00:30:48.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.924 --rc genhtml_branch_coverage=1 00:30:48.924 --rc genhtml_function_coverage=1 00:30:48.924 --rc genhtml_legend=1 00:30:48.924 --rc geninfo_all_blocks=1 00:30:48.924 --rc geninfo_unexecuted_blocks=1 00:30:48.924 00:30:48.924 ' 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.924 --rc genhtml_branch_coverage=1 00:30:48.924 --rc genhtml_function_coverage=1 00:30:48.924 --rc genhtml_legend=1 00:30:48.924 --rc geninfo_all_blocks=1 00:30:48.924 --rc geninfo_unexecuted_blocks=1 00:30:48.924 00:30:48.924 ' 00:30:48.924 11:49:29 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.924 --rc genhtml_branch_coverage=1 00:30:48.924 --rc genhtml_function_coverage=1 00:30:48.924 --rc genhtml_legend=1 00:30:48.924 --rc geninfo_all_blocks=1 00:30:48.924 --rc geninfo_unexecuted_blocks=1 00:30:48.924 00:30:48.924 ' 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:48.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:48.924 --rc genhtml_branch_coverage=1 00:30:48.924 --rc genhtml_function_coverage=1 00:30:48.924 --rc genhtml_legend=1 00:30:48.924 --rc geninfo_all_blocks=1 00:30:48.924 --rc geninfo_unexecuted_blocks=1 00:30:48.924 00:30:48.924 ' 00:30:48.924 11:49:29 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.924 11:49:29 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.924 11:49:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.924 11:49:29 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.924 11:49:29 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.924 11:49:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:48.924 11:49:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:48.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.924 11:49:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:48.924 11:49:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:48.924 11:49:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:48.924 11:49:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:48.924 11:49:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.924 11:49:29 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:30:48.924 11:49:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:51.461 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.461 
11:49:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:51.461 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:51.461 Found net devices under 0000:09:00.0: cvl_0_0 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:51.461 Found net devices under 0000:09:00.1: cvl_0_1 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:51.461 11:49:31 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:30:51.462 00:30:51.462 --- 10.0.0.2 ping statistics --- 00:30:51.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.462 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:30:51.462 00:30:51.462 --- 10.0.0.1 ping statistics --- 00:30:51.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.462 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:51.462 11:49:31 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:52.396 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:52.396 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:52.396 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:52.396 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:52.396 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:52.396 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:52.396 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:52.396 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:52.396 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:52.396 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:52.396 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:52.396 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:52.396 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:52.396 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:52.396 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:52.396 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:52.396 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:52.396 11:49:32 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.396 11:49:32 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:52.396 11:49:32 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:52.396 11:49:32 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.396 11:49:32 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:52.396 11:49:32 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:52.397 11:49:32 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:52.397 11:49:32 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:52.397 11:49:32 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.397 11:49:32 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3101458 00:30:52.397 11:49:32 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:52.397 11:49:32 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3101458 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3101458 ']' 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:52.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.397 11:49:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.677 [2024-11-15 11:49:32.858913] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:30:52.677 [2024-11-15 11:49:32.858995] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.677 [2024-11-15 11:49:32.929932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.677 [2024-11-15 11:49:32.986157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.677 [2024-11-15 11:49:32.986213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.677 [2024-11-15 11:49:32.986240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.677 [2024-11-15 11:49:32.986251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.677 [2024-11-15 11:49:32.986261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.677 [2024-11-15 11:49:32.986884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:30:52.975 11:49:33 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 11:49:33 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.975 11:49:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:52.975 11:49:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 [2024-11-15 11:49:33.125126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.975 11:49:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 ************************************ 00:30:52.975 START TEST fio_dif_1_default 00:30:52.975 ************************************ 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 bdev_null0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:52.975 [2024-11-15 11:49:33.181433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.975 { 00:30:52.975 "params": { 00:30:52.975 "name": "Nvme$subsystem", 00:30:52.975 "trtype": "$TEST_TRANSPORT", 00:30:52.975 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.975 "adrfam": "ipv4", 00:30:52.975 "trsvcid": "$NVMF_PORT", 00:30:52.975 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.975 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.975 "hdgst": ${hdgst:-false}, 00:30:52.975 "ddgst": ${ddgst:-false} 00:30:52.975 }, 00:30:52.975 "method": "bdev_nvme_attach_controller" 00:30:52.975 } 00:30:52.975 EOF 00:30:52.975 )") 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
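The fio_bdev call traced above reduces to a single fio invocation: gen_nvmf_target_json builds the bdev_nvme_attach_controller JSON that fio receives via --spdk_json_conf, gen_fio_conf builds the job file passed as the last argument, and the SPDK bdev ioengine plugin is preloaded. A minimal sketch of the resulting command, using the plugin and fio paths from this run and assuming the caller wires the JSON and the job file to descriptors 62 and 61 the way the helper does here:

  # Preload the SPDK bdev ioengine and run fio against the generated config:
  #   fd 62 = JSON with the bdev_nvme_attach_controller params printed just below
  #   fd 61 = fio job file (filename0: randread, bs=4096, iodepth=4)
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
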
00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.975 "params": { 00:30:52.975 "name": "Nvme0", 00:30:52.975 "trtype": "tcp", 00:30:52.975 "traddr": "10.0.0.2", 00:30:52.975 "adrfam": "ipv4", 00:30:52.975 "trsvcid": "4420", 00:30:52.975 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.975 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.975 "hdgst": false, 00:30:52.975 "ddgst": false 00:30:52.975 }, 00:30:52.975 "method": "bdev_nvme_attach_controller" 00:30:52.975 }' 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:52.975 11:49:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.249 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.249 fio-3.35 00:30:53.249 Starting 1 thread 00:31:05.443 00:31:05.443 filename0: (groupid=0, jobs=1): err= 0: pid=3101715: Fri Nov 15 11:49:44 2024 00:31:05.443 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:31:05.443 slat (nsec): min=4597, max=46306, avg=9238.63, stdev=2661.92 00:31:05.443 clat (usec): min=40828, max=48680, avg=41002.83, stdev=493.01 00:31:05.443 lat (usec): min=40836, max=48693, avg=41012.07, stdev=492.91 00:31:05.443 clat percentiles (usec): 00:31:05.443 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:05.443 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:05.443 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:05.443 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:31:05.443 | 99.99th=[48497] 00:31:05.443 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:31:05.443 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:05.443 lat (msec) : 50=100.00% 00:31:05.443 cpu : usr=90.79%, sys=8.94%, ctx=13, majf=0, minf=227 00:31:05.443 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.443 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.443 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.443 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:05.443 00:31:05.443 Run status group 0 (all jobs): 
00:31:05.443 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 00:31:05.443 real 0m11.217s 00:31:05.443 user 0m10.369s 00:31:05.443 sys 0m1.162s 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 ************************************ 00:31:05.443 END TEST fio_dif_1_default 00:31:05.443 ************************************ 00:31:05.443 11:49:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:05.443 11:49:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:05.443 11:49:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 ************************************ 00:31:05.443 START TEST fio_dif_1_multi_subsystems 00:31:05.443 ************************************ 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 bdev_null0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 [2024-11-15 11:49:44.449538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 bdev_null1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.443 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.444 { 00:31:05.444 "params": { 00:31:05.444 "name": "Nvme$subsystem", 00:31:05.444 "trtype": "$TEST_TRANSPORT", 00:31:05.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.444 "adrfam": "ipv4", 00:31:05.444 "trsvcid": "$NVMF_PORT", 00:31:05.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.444 "hdgst": ${hdgst:-false}, 00:31:05.444 "ddgst": ${ddgst:-false} 00:31:05.444 }, 00:31:05.444 "method": "bdev_nvme_attach_controller" 00:31:05.444 } 00:31:05.444 EOF 00:31:05.444 )") 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1349 -- # grep libasan 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.444 { 00:31:05.444 "params": { 00:31:05.444 "name": "Nvme$subsystem", 00:31:05.444 "trtype": "$TEST_TRANSPORT", 00:31:05.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.444 "adrfam": "ipv4", 00:31:05.444 "trsvcid": "$NVMF_PORT", 00:31:05.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.444 "hdgst": ${hdgst:-false}, 00:31:05.444 "ddgst": ${ddgst:-false} 00:31:05.444 }, 00:31:05.444 "method": "bdev_nvme_attach_controller" 00:31:05.444 } 00:31:05.444 EOF 00:31:05.444 )") 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.444 "params": { 00:31:05.444 "name": "Nvme0", 00:31:05.444 "trtype": "tcp", 00:31:05.444 "traddr": "10.0.0.2", 00:31:05.444 "adrfam": "ipv4", 00:31:05.444 "trsvcid": "4420", 00:31:05.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.444 "hdgst": false, 00:31:05.444 "ddgst": false 00:31:05.444 }, 00:31:05.444 "method": "bdev_nvme_attach_controller" 00:31:05.444 },{ 00:31:05.444 "params": { 00:31:05.444 "name": "Nvme1", 00:31:05.444 "trtype": "tcp", 00:31:05.444 "traddr": "10.0.0.2", 00:31:05.444 "adrfam": "ipv4", 00:31:05.444 "trsvcid": "4420", 00:31:05.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.444 "hdgst": false, 00:31:05.444 "ddgst": false 00:31:05.444 }, 00:31:05.444 "method": "bdev_nvme_attach_controller" 00:31:05.444 }' 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:05.444 11:49:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:05.444 11:49:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.444 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:05.444 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:05.444 fio-3.35 00:31:05.444 Starting 2 threads 00:31:15.414 00:31:15.414 filename0: (groupid=0, jobs=1): err= 0: pid=3103099: Fri Nov 15 11:49:55 2024 00:31:15.414 read: IOPS=103, BW=414KiB/s (423kB/s)(4144KiB/10021msec) 00:31:15.414 slat (nsec): min=6944, max=42359, avg=9170.93, stdev=3416.54 00:31:15.414 clat (usec): min=563, max=42384, avg=38660.71, stdev=9438.02 00:31:15.414 lat (usec): min=570, max=42426, avg=38669.88, stdev=9438.01 00:31:15.414 clat percentiles (usec): 00:31:15.414 | 1.00th=[ 594], 5.00th=[ 652], 10.00th=[41157], 20.00th=[41157], 00:31:15.414 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:15.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:15.414 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:15.414 | 99.99th=[42206] 00:31:15.414 bw ( KiB/s): min= 384, max= 480, per=50.50%, avg=412.80, stdev=34.28, samples=20 00:31:15.414 iops : min= 96, max= 120, avg=103.20, stdev= 8.57, samples=20 00:31:15.414 lat (usec) : 750=5.79% 00:31:15.414 lat (msec) : 50=94.21% 00:31:15.414 cpu : usr=95.35%, sys=4.32%, ctx=16, majf=0, minf=106 00:31:15.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.414 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:15.414 filename1: (groupid=0, jobs=1): err= 0: pid=3103100: Fri Nov 15 11:49:55 2024 00:31:15.414 read: IOPS=100, BW=403KiB/s (412kB/s)(4032KiB/10014msec) 00:31:15.414 slat (nsec): min=6944, max=76790, avg=9200.53, stdev=4222.37 00:31:15.414 clat (usec): min=606, max=42327, avg=39707.05, stdev=7074.09 00:31:15.414 lat (usec): min=615, max=42355, avg=39716.25, stdev=7073.87 00:31:15.414 clat percentiles (usec): 00:31:15.414 | 1.00th=[ 644], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:15.414 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:15.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:15.414 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:15.414 | 99.99th=[42206] 00:31:15.414 bw ( KiB/s): min= 384, max= 448, per=49.15%, avg=401.60, stdev=24.29, samples=20 00:31:15.414 iops : min= 96, max= 112, avg=100.40, stdev= 6.07, samples=20 00:31:15.414 lat (usec) : 750=3.17% 00:31:15.414 lat (msec) : 50=96.83% 00:31:15.414 cpu : usr=95.22%, sys=4.46%, ctx=13, majf=0, minf=221 00:31:15.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.414 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.414 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:15.414 00:31:15.414 Run status group 0 (all jobs): 00:31:15.414 READ: bw=816KiB/s (835kB/s), 403KiB/s-414KiB/s (412kB/s-423kB/s), io=8176KiB (8372kB), run=10014-10021msec 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.672 00:31:15.672 real 0m11.567s 00:31:15.672 user 0m20.592s 00:31:15.672 sys 0m1.183s 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.672 11:49:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 ************************************ 00:31:15.672 END TEST fio_dif_1_multi_subsystems 00:31:15.672 ************************************ 00:31:15.672 11:49:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 
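The destroy_subsystems calls above are ordinary SPDK JSON-RPC invocations issued through the rpc_cmd test helper. For reference, the same cleanup can be replayed by hand with the stock scripts/rpc.py client; this is a minimal sketch, assuming the SPDK checkout path used by this job, the default /var/tmp/spdk.sock RPC socket, and a target application that is still running:

# Sketch: manual equivalent of the destroy_subsystems 0 1 teardown above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

for sub in 0 1; do
    # Remove the NVMe-oF subsystem first so no initiator still sees the
    # namespace, then delete the null bdev that backed it.
    sudo $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    sudo $RPC bdev_null_delete "bdev_null$sub"
done

The ordering mirrors the harness: the subsystem (and any host connections to it) goes away before the backing bdev is deleted.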
00:31:15.672 11:49:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:15.672 11:49:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.672 11:49:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 ************************************ 00:31:15.672 START TEST fio_dif_rand_params 00:31:15.672 ************************************ 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.672 bdev_null0 00:31:15.672 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.673 [2024-11-15 11:49:56.070490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:15.673 { 00:31:15.673 "params": { 00:31:15.673 "name": "Nvme$subsystem", 00:31:15.673 "trtype": "$TEST_TRANSPORT", 00:31:15.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:15.673 "adrfam": "ipv4", 00:31:15.673 "trsvcid": "$NVMF_PORT", 00:31:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:15.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:15.673 "hdgst": ${hdgst:-false}, 00:31:15.673 "ddgst": ${ddgst:-false} 00:31:15.673 }, 00:31:15.673 "method": "bdev_nvme_attach_controller" 00:31:15.673 } 00:31:15.673 EOF 00:31:15.673 )") 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
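The config+=(...) heredoc just above is how gen_nvmf_target_json builds one bdev_nvme_attach_controller entry per requested subsystem before the fragments are comma-joined and passed through jq; the resolved output is printed a few lines below. A stripped-down sketch of that accumulation pattern follows (not the helper's exact code; the array wrapper at the end stands in for the fuller target JSON the real helper emits):

#!/usr/bin/env bash
# Sketch of the heredoc-accumulation pattern used by gen_nvmf_target_json:
# one JSON fragment per subsystem id, comma-joined and validated with jq.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in "${@:-0}"; do   # subsystem ids, defaulting to "0" as in this test
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments and wrap them in an array so jq can validate and
# pretty-print them; the real helper embeds them in its full config instead.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .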
00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:15.673 11:49:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:15.673 "params": { 00:31:15.673 "name": "Nvme0", 00:31:15.673 "trtype": "tcp", 00:31:15.673 "traddr": "10.0.0.2", 00:31:15.673 "adrfam": "ipv4", 00:31:15.673 "trsvcid": "4420", 00:31:15.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:15.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:15.673 "hdgst": false, 00:31:15.673 "ddgst": false 00:31:15.673 }, 00:31:15.673 "method": "bdev_nvme_attach_controller" 00:31:15.673 }' 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:15.931 11:49:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:15.931 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:15.931 ... 
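The fio_bdev wrapper above resolves any sanitizer runtime with ldd (empty here), prepends it together with the external spdk_bdev engine to LD_PRELOAD, and then hands fio the SPDK JSON config and the generated job file as anonymous descriptors (/dev/fd/62 and /dev/fd/61). Run outside the harness, the same invocation can be written with a plain config file and process substitution; this is a minimal sketch in which the bdev.json name, the Nvme0n1 bdev name, and the job-file contents are assumptions mirroring the NULL_DIF=3 parameters above (rw=randread, bs=128k, numjobs=3, iodepth=3, runtime=5) rather than the harness's exact gen_fio_conf output:

# Sketch: drive the attached namespace with fio's external spdk_bdev engine.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
CONF=bdev.json   # hypothetical file holding the JSON printed by gen_nvmf_target_json

LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf="$CONF" \
    <(cat <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF
)

Passing both inputs as /dev/fd paths, as the harness does, simply avoids temporary files; feeding a real config file behaves the same.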
00:31:15.931 fio-3.35 00:31:15.931 Starting 3 threads 00:31:22.487 00:31:22.487 filename0: (groupid=0, jobs=1): err= 0: pid=3104496: Fri Nov 15 11:50:01 2024 00:31:22.487 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(150MiB/5005msec) 00:31:22.487 slat (nsec): min=4232, max=26880, avg=13740.69, stdev=1535.41 00:31:22.487 clat (usec): min=4574, max=51359, avg=12501.64, stdev=4829.70 00:31:22.487 lat (usec): min=4582, max=51371, avg=12515.38, stdev=4829.50 00:31:22.487 clat percentiles (usec): 00:31:22.487 | 1.00th=[ 7373], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10683], 00:31:22.487 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:31:22.487 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14222], 95.00th=[14877], 00:31:22.487 | 99.00th=[48497], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:31:22.487 | 99.99th=[51119] 00:31:22.487 bw ( KiB/s): min=21248, max=33024, per=34.57%, avg=30643.20, stdev=3438.94, samples=10 00:31:22.487 iops : min= 166, max= 258, avg=239.40, stdev=26.87, samples=10 00:31:22.487 lat (msec) : 10=10.09%, 20=88.41%, 50=0.83%, 100=0.67% 00:31:22.487 cpu : usr=93.94%, sys=5.54%, ctx=7, majf=0, minf=75 00:31:22.487 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.487 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:22.487 filename0: (groupid=0, jobs=1): err= 0: pid=3104497: Fri Nov 15 11:50:01 2024 00:31:22.487 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(142MiB/5045msec) 00:31:22.487 slat (nsec): min=4499, max=39619, avg=15848.64, stdev=3612.87 00:31:22.487 clat (usec): min=6578, max=54937, avg=13228.88, stdev=5217.63 00:31:22.487 lat (usec): min=6591, max=54955, avg=13244.73, stdev=5217.42 00:31:22.487 clat percentiles (usec): 00:31:22.487 | 1.00th=[ 7242], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11076], 00:31:22.487 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:31:22.487 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15139], 95.00th=[15926], 00:31:22.487 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52167], 99.95th=[54789], 00:31:22.487 | 99.99th=[54789] 00:31:22.487 bw ( KiB/s): min=19968, max=32000, per=32.83%, avg=29107.20, stdev=3518.49, samples=10 00:31:22.487 iops : min= 156, max= 250, avg=227.40, stdev=27.49, samples=10 00:31:22.487 lat (msec) : 10=6.94%, 20=91.31%, 50=0.79%, 100=0.97% 00:31:22.487 cpu : usr=93.87%, sys=5.55%, ctx=10, majf=0, minf=86 00:31:22.487 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.487 issued rwts: total=1139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:22.487 filename0: (groupid=0, jobs=1): err= 0: pid=3104498: Fri Nov 15 11:50:01 2024 00:31:22.487 read: IOPS=229, BW=28.6MiB/s (30.0MB/s)(145MiB/5044msec) 00:31:22.487 slat (nsec): min=4284, max=53776, avg=16469.14, stdev=5501.87 00:31:22.487 clat (usec): min=4155, max=55557, avg=13033.68, stdev=4796.84 00:31:22.487 lat (usec): min=4167, max=55571, avg=13050.15, stdev=4796.80 00:31:22.487 clat percentiles (usec): 00:31:22.487 | 1.00th=[ 4686], 5.00th=[ 7308], 10.00th=[ 9765], 
20.00th=[11076], 00:31:22.487 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12911], 60.00th=[13435], 00:31:22.487 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15401], 95.00th=[15926], 00:31:22.487 | 99.00th=[46924], 99.50th=[51643], 99.90th=[54789], 99.95th=[55313], 00:31:22.487 | 99.99th=[55313] 00:31:22.487 bw ( KiB/s): min=25856, max=33792, per=33.30%, avg=29516.80, stdev=2413.74, samples=10 00:31:22.487 iops : min= 202, max= 264, avg=230.60, stdev=18.86, samples=10 00:31:22.487 lat (msec) : 10=10.81%, 20=87.98%, 50=0.69%, 100=0.52% 00:31:22.487 cpu : usr=89.27%, sys=7.89%, ctx=288, majf=0, minf=132 00:31:22.487 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.487 issued rwts: total=1156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:22.487 00:31:22.487 Run status group 0 (all jobs): 00:31:22.487 READ: bw=86.6MiB/s (90.8MB/s), 28.2MiB/s-29.9MiB/s (29.6MB/s-31.4MB/s), io=437MiB (458MB), run=5005-5045msec 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.487 bdev_null0 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.487 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 [2024-11-15 11:50:02.246767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 bdev_null1 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 bdev_null2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.488 { 00:31:22.488 "params": { 00:31:22.488 "name": "Nvme$subsystem", 00:31:22.488 "trtype": "$TEST_TRANSPORT", 00:31:22.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.488 "adrfam": "ipv4", 
00:31:22.488 "trsvcid": "$NVMF_PORT", 00:31:22.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.488 "hdgst": ${hdgst:-false}, 00:31:22.488 "ddgst": ${ddgst:-false} 00:31:22.488 }, 00:31:22.488 "method": "bdev_nvme_attach_controller" 00:31:22.488 } 00:31:22.488 EOF 00:31:22.488 )") 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.488 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.489 { 00:31:22.489 "params": { 00:31:22.489 "name": "Nvme$subsystem", 00:31:22.489 "trtype": "$TEST_TRANSPORT", 00:31:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.489 "adrfam": "ipv4", 00:31:22.489 "trsvcid": "$NVMF_PORT", 00:31:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.489 "hdgst": ${hdgst:-false}, 00:31:22.489 "ddgst": ${ddgst:-false} 00:31:22.489 }, 00:31:22.489 "method": "bdev_nvme_attach_controller" 00:31:22.489 } 00:31:22.489 EOF 00:31:22.489 )") 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:22.489 { 00:31:22.489 "params": { 00:31:22.489 "name": "Nvme$subsystem", 00:31:22.489 "trtype": "$TEST_TRANSPORT", 00:31:22.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.489 "adrfam": "ipv4", 00:31:22.489 "trsvcid": "$NVMF_PORT", 00:31:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.489 "hdgst": ${hdgst:-false}, 00:31:22.489 "ddgst": ${ddgst:-false} 00:31:22.489 }, 00:31:22.489 "method": "bdev_nvme_attach_controller" 00:31:22.489 } 00:31:22.489 EOF 00:31:22.489 )") 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:22.489 "params": { 00:31:22.489 "name": "Nvme0", 00:31:22.489 "trtype": "tcp", 00:31:22.489 "traddr": "10.0.0.2", 00:31:22.489 "adrfam": "ipv4", 00:31:22.489 "trsvcid": "4420", 00:31:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.489 "hdgst": false, 00:31:22.489 "ddgst": false 00:31:22.489 }, 00:31:22.489 "method": "bdev_nvme_attach_controller" 00:31:22.489 },{ 00:31:22.489 "params": { 00:31:22.489 "name": "Nvme1", 00:31:22.489 "trtype": "tcp", 00:31:22.489 "traddr": "10.0.0.2", 00:31:22.489 "adrfam": "ipv4", 00:31:22.489 "trsvcid": "4420", 00:31:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.489 "hdgst": false, 00:31:22.489 "ddgst": false 00:31:22.489 }, 00:31:22.489 "method": "bdev_nvme_attach_controller" 00:31:22.489 },{ 00:31:22.489 "params": { 00:31:22.489 "name": "Nvme2", 00:31:22.489 "trtype": "tcp", 00:31:22.489 "traddr": "10.0.0.2", 00:31:22.489 "adrfam": "ipv4", 00:31:22.489 "trsvcid": "4420", 00:31:22.489 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:22.489 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:22.489 "hdgst": false, 00:31:22.489 "ddgst": false 00:31:22.489 }, 00:31:22.489 "method": "bdev_nvme_attach_controller" 00:31:22.489 }' 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:22.489 11:50:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:22.489 11:50:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.489 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:22.489 ... 00:31:22.489 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:22.489 ... 00:31:22.489 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:22.489 ... 00:31:22.489 fio-3.35 00:31:22.489 Starting 24 threads 00:31:34.688 00:31:34.688 filename0: (groupid=0, jobs=1): err= 0: pid=3105364: Fri Nov 15 11:50:13 2024 00:31:34.688 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10011msec) 00:31:34.688 slat (nsec): min=7985, max=80514, avg=16615.61, stdev=10138.85 00:31:34.688 clat (usec): min=14142, max=46893, avg=34729.28, stdev=3772.70 00:31:34.688 lat (usec): min=14198, max=46910, avg=34745.89, stdev=3771.20 00:31:34.688 clat percentiles (usec): 00:31:34.688 | 1.00th=[29754], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:34.688 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:34.688 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.688 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:31:34.688 | 99.99th=[46924] 00:31:34.688 bw ( KiB/s): min= 1408, max= 2048, per=4.16%, avg=1830.55, stdev=176.76, samples=20 00:31:34.688 iops : min= 352, max= 512, avg=457.60, stdev=44.17, samples=20 00:31:34.688 lat (msec) : 20=0.35%, 50=99.65% 00:31:34.688 cpu : usr=98.05%, sys=1.53%, ctx=18, majf=0, minf=9 00:31:34.688 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.688 filename0: (groupid=0, jobs=1): err= 0: pid=3105365: Fri Nov 15 11:50:13 2024 00:31:34.688 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.688 slat (usec): min=8, max=105, avg=29.84, stdev=13.88 00:31:34.688 clat (usec): min=22696, max=46927, avg=34721.87, stdev=3714.58 00:31:34.688 lat (usec): min=22712, max=46976, avg=34751.71, stdev=3713.81 00:31:34.688 clat percentiles (usec): 00:31:34.688 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:34.688 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.688 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.688 | 99.00th=[45876], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:31:34.688 | 99.99th=[46924] 00:31:34.688 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1818.95, stdev=173.73, samples=19 00:31:34.688 iops : min= 352, max= 480, avg=454.74, stdev=43.43, samples=19 00:31:34.688 lat (msec) : 50=100.00% 00:31:34.688 cpu : usr=98.32%, sys=1.26%, ctx=15, majf=0, minf=9 00:31:34.688 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:31:34.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.688 filename0: (groupid=0, jobs=1): err= 0: pid=3105366: Fri Nov 15 11:50:13 2024 00:31:34.688 read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.0MiB/10021msec) 00:31:34.688 slat (usec): min=8, max=101, avg=30.96, stdev=17.23 00:31:34.688 clat (usec): min=16512, max=46952, avg=34548.04, stdev=3997.83 00:31:34.688 lat (usec): min=16565, max=46970, avg=34579.00, stdev=3994.09 00:31:34.688 clat percentiles (usec): 00:31:34.688 | 1.00th=[21103], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:34.688 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.688 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.688 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:31:34.688 | 99.99th=[46924] 00:31:34.688 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1836.80, stdev=172.61, samples=20 00:31:34.688 iops : min= 352, max= 480, avg=459.20, stdev=43.15, samples=20 00:31:34.688 lat (msec) : 20=1.00%, 50=99.00% 00:31:34.688 cpu : usr=97.44%, sys=1.66%, ctx=177, majf=0, minf=9 00:31:34.688 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.688 filename0: (groupid=0, jobs=1): err= 0: pid=3105367: Fri Nov 15 11:50:13 2024 00:31:34.688 read: IOPS=484, BW=1937KiB/s (1984kB/s)(19.0MiB/10022msec) 00:31:34.688 slat (usec): min=7, max=167, avg=13.82, stdev=10.21 00:31:34.688 clat (usec): min=8756, max=46810, avg=32918.19, stdev=5921.26 00:31:34.688 lat (usec): min=8763, max=46833, avg=32932.00, stdev=5919.71 00:31:34.688 clat percentiles (usec): 00:31:34.688 | 1.00th=[17171], 5.00th=[21890], 10.00th=[23200], 20.00th=[32637], 00:31:34.688 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.688 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.688 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:31:34.688 | 99.99th=[46924] 00:31:34.688 bw ( KiB/s): min= 1408, max= 2608, per=4.39%, avg=1935.20, stdev=319.18, samples=20 00:31:34.688 iops : min= 352, max= 652, avg=483.80, stdev=79.79, samples=20 00:31:34.688 lat (msec) : 10=0.14%, 20=1.50%, 50=98.35% 00:31:34.688 cpu : usr=97.96%, sys=1.41%, ctx=58, majf=0, minf=9 00:31:34.688 IO depths : 1=5.0%, 2=10.0%, 4=20.9%, 8=56.3%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:34.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.688 issued rwts: total=4854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.688 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.688 filename0: (groupid=0, jobs=1): err= 0: pid=3105368: Fri Nov 15 11:50:13 2024 00:31:34.688 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10008msec) 00:31:34.688 slat (nsec): min=8831, max=65476, avg=29631.02, stdev=9724.27 00:31:34.688 clat (usec): min=22569, max=49566, 
avg=34736.96, stdev=3748.92 00:31:34.688 lat (usec): min=22589, max=49601, avg=34766.59, stdev=3749.03 00:31:34.688 clat percentiles (usec): 00:31:34.688 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:34.688 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.688 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:31:34.688 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:31:34.688 | 99.99th=[49546] 00:31:34.688 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1818.95, stdev=178.89, samples=19 00:31:34.688 iops : min= 352, max= 480, avg=454.74, stdev=44.72, samples=19 00:31:34.688 lat (msec) : 50=100.00% 00:31:34.689 cpu : usr=98.16%, sys=1.35%, ctx=70, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename0: (groupid=0, jobs=1): err= 0: pid=3105369: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.689 slat (usec): min=8, max=100, avg=32.16, stdev=19.07 00:31:34.689 clat (usec): min=15921, max=66790, avg=34686.90, stdev=4225.33 00:31:34.689 lat (usec): min=15930, max=66812, avg=34719.05, stdev=4223.28 00:31:34.689 clat percentiles (usec): 00:31:34.689 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.689 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.689 | 99.00th=[45876], 99.50th=[46400], 99.90th=[66847], 99.95th=[66847], 00:31:34.689 | 99.99th=[66847] 00:31:34.689 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1819.11, stdev=168.25, samples=19 00:31:34.689 iops : min= 352, max= 480, avg=454.74, stdev=42.10, samples=19 00:31:34.689 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.689 cpu : usr=97.44%, sys=1.60%, ctx=119, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename0: (groupid=0, jobs=1): err= 0: pid=3105370: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10009msec) 00:31:34.689 slat (usec): min=10, max=106, avg=41.12, stdev=16.82 00:31:34.689 clat (usec): min=18654, max=54192, avg=34631.57, stdev=3893.04 00:31:34.689 lat (usec): min=18686, max=54227, avg=34672.69, stdev=3890.03 00:31:34.689 clat percentiles (usec): 00:31:34.689 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.689 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.689 | 99.00th=[45876], 99.50th=[46924], 99.90th=[54264], 99.95th=[54264], 00:31:34.689 | 99.99th=[54264] 00:31:34.689 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1818.95, stdev=183.91, samples=19 00:31:34.689 iops : 
min= 352, max= 480, avg=454.74, stdev=45.98, samples=19 00:31:34.689 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.689 cpu : usr=97.25%, sys=1.77%, ctx=180, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename0: (groupid=0, jobs=1): err= 0: pid=3105371: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10006msec) 00:31:34.689 slat (usec): min=18, max=116, avg=75.96, stdev=10.99 00:31:34.689 clat (usec): min=26808, max=46781, avg=34297.57, stdev=3646.49 00:31:34.689 lat (usec): min=26858, max=46855, avg=34373.53, stdev=3646.65 00:31:34.689 clat percentiles (usec): 00:31:34.689 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:31:34.689 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:34.689 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42206], 95.00th=[42730], 00:31:34.689 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:31:34.689 | 99.99th=[46924] 00:31:34.689 bw ( KiB/s): min= 1408, max= 2048, per=4.14%, avg=1825.68, stdev=185.21, samples=19 00:31:34.689 iops : min= 352, max= 512, avg=456.42, stdev=46.30, samples=19 00:31:34.689 lat (msec) : 50=100.00% 00:31:34.689 cpu : usr=98.22%, sys=1.33%, ctx=13, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename1: (groupid=0, jobs=1): err= 0: pid=3105372: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10013msec) 00:31:34.689 slat (usec): min=7, max=141, avg=34.62, stdev=25.95 00:31:34.689 clat (usec): min=14030, max=46800, avg=34467.42, stdev=4108.64 00:31:34.689 lat (usec): min=14038, max=46818, avg=34502.04, stdev=4100.40 00:31:34.689 clat percentiles (usec): 00:31:34.689 | 1.00th=[17957], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:31:34.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.689 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.689 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:31:34.689 | 99.99th=[46924] 00:31:34.689 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1836.80, stdev=182.32, samples=20 00:31:34.689 iops : min= 352, max= 512, avg=459.20, stdev=45.58, samples=20 00:31:34.689 lat (msec) : 20=1.04%, 50=98.96% 00:31:34.689 cpu : usr=98.25%, sys=1.34%, ctx=15, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename1: (groupid=0, jobs=1): 
err= 0: pid=3105373: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.0MiB/10025msec) 00:31:34.689 slat (nsec): min=8325, max=58669, avg=21714.07, stdev=10621.74 00:31:34.689 clat (usec): min=16013, max=46629, avg=34631.24, stdev=3870.52 00:31:34.689 lat (usec): min=16037, max=46653, avg=34652.95, stdev=3869.88 00:31:34.689 clat percentiles (usec): 00:31:34.689 | 1.00th=[26870], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:34.689 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:34.689 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:31:34.689 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:31:34.689 | 99.99th=[46400] 00:31:34.689 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1836.80, stdev=177.53, samples=20 00:31:34.689 iops : min= 352, max= 480, avg=459.20, stdev=44.38, samples=20 00:31:34.689 lat (msec) : 20=0.35%, 50=99.65% 00:31:34.689 cpu : usr=97.42%, sys=1.73%, ctx=141, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename1: (groupid=0, jobs=1): err= 0: pid=3105374: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=457, BW=1828KiB/s (1872kB/s)(17.9MiB/10012msec) 00:31:34.689 slat (nsec): min=6156, max=96934, avg=32939.11, stdev=12939.28 00:31:34.689 clat (usec): min=31282, max=46900, avg=34730.85, stdev=3632.55 00:31:34.689 lat (usec): min=31327, max=46920, avg=34763.78, stdev=3631.53 00:31:34.689 clat percentiles (usec): 00:31:34.689 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:31:34.689 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.689 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.689 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:31:34.689 | 99.99th=[46924] 00:31:34.689 bw ( KiB/s): min= 1408, max= 2048, per=4.13%, avg=1818.95, stdev=178.89, samples=19 00:31:34.689 iops : min= 352, max= 512, avg=454.74, stdev=44.72, samples=19 00:31:34.689 lat (msec) : 50=100.00% 00:31:34.689 cpu : usr=98.30%, sys=1.28%, ctx=19, majf=0, minf=9 00:31:34.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.689 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.689 filename1: (groupid=0, jobs=1): err= 0: pid=3105375: Fri Nov 15 11:50:13 2024 00:31:34.689 read: IOPS=458, BW=1833KiB/s (1877kB/s)(17.9MiB/10019msec) 00:31:34.689 slat (nsec): min=8738, max=99560, avg=30304.07, stdev=15407.18 00:31:34.690 clat (usec): min=19843, max=59286, avg=34647.14, stdev=3803.40 00:31:34.690 lat (usec): min=19858, max=59301, avg=34677.45, stdev=3801.89 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:34.690 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.690 | 70.00th=[33817], 80.00th=[34866], 
90.00th=[42730], 95.00th=[42730], 00:31:34.690 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:31:34.690 | 99.99th=[59507] 00:31:34.690 bw ( KiB/s): min= 1408, max= 1936, per=4.16%, avg=1830.40, stdev=171.81, samples=20 00:31:34.690 iops : min= 352, max= 484, avg=457.60, stdev=42.95, samples=20 00:31:34.690 lat (msec) : 20=0.07%, 50=99.89%, 100=0.04% 00:31:34.690 cpu : usr=97.73%, sys=1.52%, ctx=91, majf=0, minf=9 00:31:34.690 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:34.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.690 filename1: (groupid=0, jobs=1): err= 0: pid=3105376: Fri Nov 15 11:50:13 2024 00:31:34.690 read: IOPS=457, BW=1829KiB/s (1873kB/s)(17.9MiB/10008msec) 00:31:34.690 slat (nsec): min=8193, max=79675, avg=33241.11, stdev=12245.12 00:31:34.690 clat (usec): min=18702, max=66456, avg=34701.68, stdev=3904.71 00:31:34.690 lat (usec): min=18736, max=66477, avg=34734.92, stdev=3904.67 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.690 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.690 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:31:34.690 | 99.00th=[45876], 99.50th=[46924], 99.90th=[53740], 99.95th=[53740], 00:31:34.690 | 99.99th=[66323] 00:31:34.690 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1818.95, stdev=183.91, samples=19 00:31:34.690 iops : min= 352, max= 480, avg=454.74, stdev=45.98, samples=19 00:31:34.690 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.690 cpu : usr=98.52%, sys=1.08%, ctx=9, majf=0, minf=9 00:31:34.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.690 filename1: (groupid=0, jobs=1): err= 0: pid=3105377: Fri Nov 15 11:50:13 2024 00:31:34.690 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.690 slat (usec): min=9, max=103, avg=35.70, stdev=20.23 00:31:34.690 clat (usec): min=15915, max=78690, avg=34673.63, stdev=4288.92 00:31:34.690 lat (usec): min=15929, max=78728, avg=34709.33, stdev=4285.38 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.690 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.690 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.690 | 99.00th=[45876], 99.50th=[46400], 99.90th=[66847], 99.95th=[66847], 00:31:34.690 | 99.99th=[79168] 00:31:34.690 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1819.11, stdev=168.25, samples=19 00:31:34.690 iops : min= 352, max= 480, avg=454.74, stdev=42.10, samples=19 00:31:34.690 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.690 cpu : usr=97.53%, sys=1.76%, ctx=124, majf=0, minf=9 00:31:34.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.690 filename1: (groupid=0, jobs=1): err= 0: pid=3105378: Fri Nov 15 11:50:13 2024 00:31:34.690 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.690 slat (usec): min=9, max=473, avg=40.29, stdev=18.40 00:31:34.690 clat (usec): min=18732, max=53819, avg=34625.81, stdev=3889.37 00:31:34.690 lat (usec): min=18769, max=53842, avg=34666.10, stdev=3887.17 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.690 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.690 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.690 | 99.00th=[45351], 99.50th=[46924], 99.90th=[53740], 99.95th=[53740], 00:31:34.690 | 99.99th=[53740] 00:31:34.690 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1819.11, stdev=183.77, samples=19 00:31:34.690 iops : min= 352, max= 480, avg=454.74, stdev=45.98, samples=19 00:31:34.690 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.690 cpu : usr=98.31%, sys=1.27%, ctx=13, majf=0, minf=9 00:31:34.690 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:34.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.690 filename1: (groupid=0, jobs=1): err= 0: pid=3105379: Fri Nov 15 11:50:13 2024 00:31:34.690 read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.0MiB/10025msec) 00:31:34.690 slat (nsec): min=7947, max=55975, avg=11613.57, stdev=5090.06 00:31:34.690 clat (usec): min=15959, max=46786, avg=34699.04, stdev=3866.25 00:31:34.690 lat (usec): min=15982, max=46804, avg=34710.65, stdev=3865.64 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[26870], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:34.690 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:34.690 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 95.00th=[43254], 00:31:34.690 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:31:34.690 | 99.99th=[46924] 00:31:34.690 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1836.80, stdev=177.53, samples=20 00:31:34.690 iops : min= 352, max= 480, avg=459.20, stdev=44.38, samples=20 00:31:34.690 lat (msec) : 20=0.35%, 50=99.65% 00:31:34.690 cpu : usr=98.39%, sys=1.20%, ctx=15, majf=0, minf=9 00:31:34.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.690 filename2: (groupid=0, jobs=1): err= 0: pid=3105380: Fri Nov 15 11:50:13 2024 00:31:34.690 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10013msec) 00:31:34.690 slat (usec): min=7, max=163, avg=28.97, stdev=23.08 00:31:34.690 clat (usec): min=10466, max=46838, avg=34515.30, stdev=4105.82 00:31:34.690 lat (usec): min=10512, 
max=46854, avg=34544.27, stdev=4099.59 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[23462], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:31:34.690 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.690 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:31:34.690 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:31:34.690 | 99.99th=[46924] 00:31:34.690 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1836.80, stdev=182.32, samples=20 00:31:34.690 iops : min= 352, max= 512, avg=459.20, stdev=45.58, samples=20 00:31:34.690 lat (msec) : 20=1.00%, 50=99.00% 00:31:34.690 cpu : usr=98.09%, sys=1.50%, ctx=19, majf=0, minf=9 00:31:34.690 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:34.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.690 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.690 filename2: (groupid=0, jobs=1): err= 0: pid=3105381: Fri Nov 15 11:50:13 2024 00:31:34.690 read: IOPS=458, BW=1835KiB/s (1879kB/s)(17.9MiB/10012msec) 00:31:34.690 slat (usec): min=5, max=108, avg=42.23, stdev=18.74 00:31:34.690 clat (usec): min=18703, max=64679, avg=34517.08, stdev=4085.67 00:31:34.690 lat (usec): min=18714, max=64695, avg=34559.31, stdev=4082.02 00:31:34.690 clat percentiles (usec): 00:31:34.690 | 1.00th=[23462], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:34.690 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:34.690 | 70.00th=[33424], 80.00th=[34866], 90.00th=[42730], 95.00th=[42730], 00:31:34.690 | 99.00th=[45876], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:31:34.690 | 99.99th=[64750] 00:31:34.690 bw ( KiB/s): min= 1408, max= 2048, per=4.14%, avg=1825.84, stdev=189.92, samples=19 00:31:34.690 iops : min= 352, max= 512, avg=456.42, stdev=47.51, samples=19 00:31:34.690 lat (msec) : 20=0.35%, 50=99.61%, 100=0.04% 00:31:34.690 cpu : usr=98.58%, sys=1.01%, ctx=14, majf=0, minf=9 00:31:34.691 IO depths : 1=5.3%, 2=11.3%, 4=24.2%, 8=51.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 filename2: (groupid=0, jobs=1): err= 0: pid=3105382: Fri Nov 15 11:50:13 2024 00:31:34.691 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.691 slat (usec): min=10, max=103, avg=39.39, stdev=21.15 00:31:34.691 clat (usec): min=15928, max=78625, avg=34649.62, stdev=4299.46 00:31:34.691 lat (usec): min=15959, max=78659, avg=34689.01, stdev=4294.73 00:31:34.691 clat percentiles (usec): 00:31:34.691 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.691 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:34.691 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.691 | 99.00th=[45876], 99.50th=[46400], 99.90th=[66847], 99.95th=[66847], 00:31:34.691 | 99.99th=[78119] 00:31:34.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1819.11, stdev=167.43, samples=19 00:31:34.691 iops : min= 352, max= 480, avg=454.74, stdev=41.85, samples=19 
00:31:34.691 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.691 cpu : usr=98.46%, sys=1.12%, ctx=13, majf=0, minf=9 00:31:34.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 filename2: (groupid=0, jobs=1): err= 0: pid=3105383: Fri Nov 15 11:50:13 2024 00:31:34.691 read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.0MiB/10021msec) 00:31:34.691 slat (usec): min=10, max=110, avg=73.53, stdev=10.17 00:31:34.691 clat (usec): min=16391, max=46817, avg=34137.07, stdev=3967.32 00:31:34.691 lat (usec): min=16444, max=46915, avg=34210.60, stdev=3968.95 00:31:34.691 clat percentiles (usec): 00:31:34.691 | 1.00th=[17957], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:31:34.691 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:34.691 | 70.00th=[33424], 80.00th=[34341], 90.00th=[42206], 95.00th=[42730], 00:31:34.691 | 99.00th=[44827], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:31:34.691 | 99.99th=[46924] 00:31:34.691 bw ( KiB/s): min= 1408, max= 1920, per=4.17%, avg=1836.80, stdev=172.61, samples=20 00:31:34.691 iops : min= 352, max= 480, avg=459.20, stdev=43.15, samples=20 00:31:34.691 lat (msec) : 20=1.04%, 50=98.96% 00:31:34.691 cpu : usr=98.32%, sys=1.21%, ctx=14, majf=0, minf=9 00:31:34.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 filename2: (groupid=0, jobs=1): err= 0: pid=3105384: Fri Nov 15 11:50:13 2024 00:31:34.691 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.691 slat (nsec): min=8900, max=83570, avg=33192.71, stdev=11011.35 00:31:34.691 clat (usec): min=18752, max=66549, avg=34707.53, stdev=3905.99 00:31:34.691 lat (usec): min=18786, max=66571, avg=34740.72, stdev=3905.70 00:31:34.691 clat percentiles (usec): 00:31:34.691 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:31:34.691 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.691 | 99.00th=[45876], 99.50th=[46924], 99.90th=[53740], 99.95th=[53740], 00:31:34.691 | 99.99th=[66323] 00:31:34.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1818.95, stdev=183.91, samples=19 00:31:34.691 iops : min= 352, max= 480, avg=454.74, stdev=45.98, samples=19 00:31:34.691 lat (msec) : 20=0.31%, 50=99.34%, 100=0.35% 00:31:34.691 cpu : usr=97.51%, sys=1.56%, ctx=117, majf=0, minf=9 00:31:34.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 filename2: (groupid=0, jobs=1): err= 0: pid=3105385: Fri 
Nov 15 11:50:13 2024 00:31:34.691 read: IOPS=457, BW=1829KiB/s (1872kB/s)(17.9MiB/10010msec) 00:31:34.691 slat (nsec): min=9732, max=97223, avg=36921.18, stdev=12679.91 00:31:34.691 clat (usec): min=18715, max=55418, avg=34665.97, stdev=3894.33 00:31:34.691 lat (usec): min=18737, max=55451, avg=34702.89, stdev=3893.66 00:31:34.691 clat percentiles (usec): 00:31:34.691 | 1.00th=[32375], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:31:34.691 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:31:34.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.691 | 99.00th=[45876], 99.50th=[46924], 99.90th=[55313], 99.95th=[55313], 00:31:34.691 | 99.99th=[55313] 00:31:34.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1819.11, stdev=183.77, samples=19 00:31:34.691 iops : min= 352, max= 480, avg=454.74, stdev=45.98, samples=19 00:31:34.691 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:31:34.691 cpu : usr=97.54%, sys=1.61%, ctx=134, majf=0, minf=10 00:31:34.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 filename2: (groupid=0, jobs=1): err= 0: pid=3105386: Fri Nov 15 11:50:13 2024 00:31:34.691 read: IOPS=455, BW=1823KiB/s (1866kB/s)(17.8MiB/10008msec) 00:31:34.691 slat (usec): min=8, max=102, avg=31.00, stdev=13.20 00:31:34.691 clat (usec): min=22568, max=79068, avg=34799.55, stdev=4488.61 00:31:34.691 lat (usec): min=22579, max=79100, avg=34830.54, stdev=4488.37 00:31:34.691 clat percentiles (usec): 00:31:34.691 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32637], 00:31:34.691 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33424], 00:31:34.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[42730], 00:31:34.691 | 99.00th=[45876], 99.50th=[46400], 99.90th=[79168], 99.95th=[79168], 00:31:34.691 | 99.99th=[79168] 00:31:34.691 bw ( KiB/s): min= 1408, max= 2048, per=4.12%, avg=1812.37, stdev=182.20, samples=19 00:31:34.691 iops : min= 352, max= 512, avg=453.05, stdev=45.58, samples=19 00:31:34.691 lat (msec) : 50=99.65%, 100=0.35% 00:31:34.691 cpu : usr=95.97%, sys=2.52%, ctx=415, majf=0, minf=9 00:31:34.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 filename2: (groupid=0, jobs=1): err= 0: pid=3105387: Fri Nov 15 11:50:13 2024 00:31:34.691 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10013msec) 00:31:34.691 slat (nsec): min=7561, max=67980, avg=14295.15, stdev=7643.07 00:31:34.691 clat (usec): min=13110, max=46844, avg=34634.99, stdev=4056.17 00:31:34.691 lat (usec): min=13118, max=46871, avg=34649.28, stdev=4054.37 00:31:34.691 clat percentiles (usec): 00:31:34.691 | 1.00th=[18744], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:34.691 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:34.691 | 70.00th=[33817], 80.00th=[34866], 90.00th=[42730], 
95.00th=[43254], 00:31:34.691 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46924], 99.95th=[46924], 00:31:34.691 | 99.99th=[46924] 00:31:34.691 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1836.80, stdev=182.32, samples=20 00:31:34.691 iops : min= 352, max= 512, avg=459.20, stdev=45.58, samples=20 00:31:34.691 lat (msec) : 20=1.04%, 50=98.96% 00:31:34.691 cpu : usr=98.36%, sys=1.23%, ctx=16, majf=0, minf=9 00:31:34.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:34.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.691 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:34.691 00:31:34.691 Run status group 0 (all jobs): 00:31:34.691 READ: bw=43.0MiB/s (45.1MB/s), 1823KiB/s-1937KiB/s (1866kB/s-1984kB/s), io=431MiB (452MB), run=10006-10025msec 00:31:34.691 11:50:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:34.691 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:34.691 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.691 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:34.691 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:34.691 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.692 11:50:14 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 bdev_null0 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 [2024-11-15 11:50:14.079473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 bdev_null1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.692 { 00:31:34.692 "params": { 00:31:34.692 "name": "Nvme$subsystem", 00:31:34.692 "trtype": "$TEST_TRANSPORT", 00:31:34.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.692 "adrfam": "ipv4", 00:31:34.692 "trsvcid": "$NVMF_PORT", 00:31:34.692 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.692 "hdgst": ${hdgst:-false}, 00:31:34.692 "ddgst": ${ddgst:-false} 00:31:34.692 }, 00:31:34.692 "method": "bdev_nvme_attach_controller" 00:31:34.692 } 00:31:34.692 EOF 00:31:34.692 )") 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:34.692 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:34.693 { 00:31:34.693 "params": { 00:31:34.693 "name": "Nvme$subsystem", 00:31:34.693 "trtype": "$TEST_TRANSPORT", 00:31:34.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.693 "adrfam": "ipv4", 00:31:34.693 "trsvcid": "$NVMF_PORT", 00:31:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.693 "hdgst": ${hdgst:-false}, 00:31:34.693 "ddgst": ${ddgst:-false} 00:31:34.693 }, 00:31:34.693 "method": "bdev_nvme_attach_controller" 00:31:34.693 } 00:31:34.693 EOF 00:31:34.693 )") 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.693 
11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:34.693 "params": { 00:31:34.693 "name": "Nvme0", 00:31:34.693 "trtype": "tcp", 00:31:34.693 "traddr": "10.0.0.2", 00:31:34.693 "adrfam": "ipv4", 00:31:34.693 "trsvcid": "4420", 00:31:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.693 "hdgst": false, 00:31:34.693 "ddgst": false 00:31:34.693 }, 00:31:34.693 "method": "bdev_nvme_attach_controller" 00:31:34.693 },{ 00:31:34.693 "params": { 00:31:34.693 "name": "Nvme1", 00:31:34.693 "trtype": "tcp", 00:31:34.693 "traddr": "10.0.0.2", 00:31:34.693 "adrfam": "ipv4", 00:31:34.693 "trsvcid": "4420", 00:31:34.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:34.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:34.693 "hdgst": false, 00:31:34.693 "ddgst": false 00:31:34.693 }, 00:31:34.693 "method": "bdev_nvme_attach_controller" 00:31:34.693 }' 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:34.693 11:50:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.693 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:34.693 ... 00:31:34.693 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:34.693 ... 
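The job file handed to fio on /dev/fd/61 above comes from gen_fio_conf in target/dif.sh and is never echoed into the log. A minimal sketch consistent with what the log reports for this run (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, two bdev job sections feeding the 4 threads) is shown below; the global defaults and the Nvme0n1/Nvme1n1 bdev names are assumptions based on the usual SPDK attach naming, not the script's literal output.

cat > /tmp/dif_rand_params.fio <<'EOF'   # hypothetical path, illustration only
[global]
ioengine=spdk_bdev      # served by the preloaded SPDK fio bdev plugin
thread=1
time_based=1
runtime=5
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2

[filename0]
filename=Nvme0n1        # namespace 1 of the attached Nvme0 controller (assumed name)

[filename1]
filename=Nvme1n1        # namespace 1 of the attached Nvme1 controller (assumed name)
EOF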
00:31:34.693 fio-3.35 00:31:34.693 Starting 4 threads 00:31:39.956 00:31:39.956 filename0: (groupid=0, jobs=1): err= 0: pid=3106769: Fri Nov 15 11:50:20 2024 00:31:39.956 read: IOPS=1959, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5004msec) 00:31:39.956 slat (nsec): min=4291, max=57350, avg=15943.76, stdev=4627.99 00:31:39.956 clat (usec): min=641, max=7370, avg=4016.81, stdev=335.04 00:31:39.956 lat (usec): min=654, max=7385, avg=4032.75, stdev=335.32 00:31:39.956 clat percentiles (usec): 00:31:39.956 | 1.00th=[ 3064], 5.00th=[ 3818], 10.00th=[ 3916], 20.00th=[ 3949], 00:31:39.956 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4047], 00:31:39.956 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4146], 95.00th=[ 4178], 00:31:39.956 | 99.00th=[ 4948], 99.50th=[ 5866], 99.90th=[ 6980], 99.95th=[ 7242], 00:31:39.956 | 99.99th=[ 7373] 00:31:39.956 bw ( KiB/s): min=15488, max=15856, per=25.09%, avg=15678.40, stdev=103.82, samples=10 00:31:39.956 iops : min= 1936, max= 1982, avg=1959.80, stdev=12.98, samples=10 00:31:39.956 lat (usec) : 750=0.02%, 1000=0.09% 00:31:39.956 lat (msec) : 2=0.41%, 4=37.81%, 10=61.67% 00:31:39.956 cpu : usr=93.12%, sys=5.44%, ctx=153, majf=0, minf=0 00:31:39.956 IO depths : 1=1.2%, 2=23.9%, 4=50.7%, 8=24.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 issued rwts: total=9807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.956 filename0: (groupid=0, jobs=1): err= 0: pid=3106770: Fri Nov 15 11:50:20 2024 00:31:39.956 read: IOPS=1950, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5002msec) 00:31:39.956 slat (nsec): min=4018, max=36591, avg=15050.40, stdev=4082.71 00:31:39.956 clat (usec): min=871, max=7392, avg=4041.35, stdev=359.82 00:31:39.956 lat (usec): min=890, max=7407, avg=4056.40, stdev=359.69 00:31:39.956 clat percentiles (usec): 00:31:39.956 | 1.00th=[ 3097], 5.00th=[ 3851], 10.00th=[ 3916], 20.00th=[ 3982], 00:31:39.956 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:31:39.956 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4146], 95.00th=[ 4228], 00:31:39.956 | 99.00th=[ 5800], 99.50th=[ 6325], 99.90th=[ 6915], 99.95th=[ 7177], 00:31:39.956 | 99.99th=[ 7373] 00:31:39.956 bw ( KiB/s): min=15456, max=15744, per=24.97%, avg=15600.00, stdev=100.88, samples=9 00:31:39.956 iops : min= 1932, max= 1968, avg=1950.00, stdev=12.61, samples=9 00:31:39.956 lat (usec) : 1000=0.06% 00:31:39.956 lat (msec) : 2=0.44%, 4=33.22%, 10=66.27% 00:31:39.956 cpu : usr=94.24%, sys=5.18%, ctx=50, majf=0, minf=9 00:31:39.956 IO depths : 1=0.7%, 2=22.9%, 4=51.7%, 8=24.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 issued rwts: total=9758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.956 filename1: (groupid=0, jobs=1): err= 0: pid=3106771: Fri Nov 15 11:50:20 2024 00:31:39.956 read: IOPS=1943, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5003msec) 00:31:39.956 slat (nsec): min=4064, max=32624, avg=14757.32, stdev=3774.30 00:31:39.956 clat (usec): min=768, max=7361, avg=4056.14, stdev=434.16 00:31:39.956 lat (usec): min=780, max=7384, avg=4070.89, stdev=434.06 00:31:39.956 clat percentiles (usec): 00:31:39.956 | 1.00th=[ 
2966], 5.00th=[ 3785], 10.00th=[ 3916], 20.00th=[ 3982], 00:31:39.956 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4047], 00:31:39.956 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4146], 95.00th=[ 4293], 00:31:39.956 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7308], 00:31:39.956 | 99.99th=[ 7373] 00:31:39.956 bw ( KiB/s): min=15216, max=15744, per=24.89%, avg=15550.30, stdev=146.17, samples=10 00:31:39.956 iops : min= 1902, max= 1968, avg=1943.70, stdev=18.29, samples=10 00:31:39.956 lat (usec) : 1000=0.09% 00:31:39.956 lat (msec) : 2=0.48%, 4=34.06%, 10=65.37% 00:31:39.956 cpu : usr=94.56%, sys=5.00%, ctx=10, majf=0, minf=9 00:31:39.956 IO depths : 1=1.0%, 2=22.9%, 4=51.5%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 issued rwts: total=9725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.956 filename1: (groupid=0, jobs=1): err= 0: pid=3106772: Fri Nov 15 11:50:20 2024 00:31:39.956 read: IOPS=1958, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5002msec) 00:31:39.956 slat (nsec): min=3874, max=35890, avg=12317.67, stdev=4520.53 00:31:39.956 clat (usec): min=1343, max=8221, avg=4045.15, stdev=225.50 00:31:39.956 lat (usec): min=1356, max=8232, avg=4057.47, stdev=225.59 00:31:39.956 clat percentiles (usec): 00:31:39.956 | 1.00th=[ 3425], 5.00th=[ 3785], 10.00th=[ 3916], 20.00th=[ 3982], 00:31:39.956 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:31:39.956 | 70.00th=[ 4080], 80.00th=[ 4113], 90.00th=[ 4178], 95.00th=[ 4228], 00:31:39.956 | 99.00th=[ 4621], 99.50th=[ 5014], 99.90th=[ 6587], 99.95th=[ 7308], 00:31:39.956 | 99.99th=[ 8225] 00:31:39.956 bw ( KiB/s): min=15360, max=16048, per=25.07%, avg=15662.22, stdev=182.68, samples=9 00:31:39.956 iops : min= 1920, max= 2006, avg=1957.78, stdev=22.84, samples=9 00:31:39.956 lat (msec) : 2=0.03%, 4=24.86%, 10=75.11% 00:31:39.956 cpu : usr=94.48%, sys=5.06%, ctx=8, majf=0, minf=9 00:31:39.956 IO depths : 1=0.2%, 2=11.1%, 4=61.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.956 issued rwts: total=9794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.956 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:39.956 00:31:39.956 Run status group 0 (all jobs): 00:31:39.956 READ: bw=61.0MiB/s (64.0MB/s), 15.2MiB/s-15.3MiB/s (15.9MB/s-16.1MB/s), io=305MiB (320MB), run=5002-5004msec 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.956 11:50:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.956 00:31:39.956 real 0m24.275s 00:31:39.956 user 4m32.854s 00:31:39.956 sys 0m6.477s 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:39.956 11:50:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.956 ************************************ 00:31:39.956 END TEST fio_dif_rand_params 00:31:39.956 ************************************ 00:31:39.956 11:50:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:39.956 11:50:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:39.956 11:50:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.956 11:50:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:39.956 ************************************ 00:31:39.956 START TEST fio_dif_digest 00:31:39.956 ************************************ 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:39.956 11:50:20 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:39.956 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:39.957 bdev_null0 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.957 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.215 [2024-11-15 11:50:20.392784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.215 { 00:31:40.215 "params": { 00:31:40.215 "name": "Nvme$subsystem", 00:31:40.215 "trtype": "$TEST_TRANSPORT", 00:31:40.215 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.215 "adrfam": "ipv4", 00:31:40.215 "trsvcid": "$NVMF_PORT", 00:31:40.215 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.215 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.215 "hdgst": ${hdgst:-false}, 00:31:40.215 "ddgst": 
${ddgst:-false} 00:31:40.215 }, 00:31:40.215 "method": "bdev_nvme_attach_controller" 00:31:40.215 } 00:31:40.215 EOF 00:31:40.215 )") 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:40.215 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
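The trace in this stretch is the fio_bdev launcher doing its sanitizer bookkeeping: it runs ldd against the SPDK fio plugin, greps for libasan (and afterwards libclang_rt.asan), and prepends whatever it finds to LD_PRELOAD ahead of the plugin before starting fio. Condensed into a standalone sketch, with the plugin path taken from the log (both greps come back empty on this runner, so only the plugin ends up preloaded):

# condensed sketch of the traced fio_bdev launch
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')     # empty here, ASAN not in use
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61   # fd 62 = target JSON, fd 61 = job file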
00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.216 "params": { 00:31:40.216 "name": "Nvme0", 00:31:40.216 "trtype": "tcp", 00:31:40.216 "traddr": "10.0.0.2", 00:31:40.216 "adrfam": "ipv4", 00:31:40.216 "trsvcid": "4420", 00:31:40.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.216 "hdgst": true, 00:31:40.216 "ddgst": true 00:31:40.216 }, 00:31:40.216 "method": "bdev_nvme_attach_controller" 00:31:40.216 }' 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.216 11:50:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.474 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:40.474 ... 
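With "hdgst": true and "ddgst": true in the attach parameters above, this run exercises NVMe/TCP header and data digests end to end. As an aside, the same digest-enabled listener could also be poked from an ordinary initiator with nvme-cli, assuming a build that exposes the TCP digest options; this is not part of the traced test, and the flags below are stock nvme-cli ones, not anything SPDK-specific:

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 \
  --hdr-digest --data-digest     # request HDGST/DDGST during ICReq negotiation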
00:31:40.474 fio-3.35 00:31:40.474 Starting 3 threads 00:31:52.666 00:31:52.666 filename0: (groupid=0, jobs=1): err= 0: pid=3107522: Fri Nov 15 11:50:31 2024 00:31:52.666 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(262MiB/10043msec) 00:31:52.666 slat (nsec): min=4254, max=29475, avg=14051.93, stdev=1318.25 00:31:52.666 clat (usec): min=11129, max=52705, avg=14344.50, stdev=1473.62 00:31:52.666 lat (usec): min=11142, max=52719, avg=14358.55, stdev=1473.63 00:31:52.666 clat percentiles (usec): 00:31:52.666 | 1.00th=[12125], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:31:52.666 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:31:52.666 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15926], 00:31:52.666 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18744], 99.95th=[49546], 00:31:52.666 | 99.99th=[52691] 00:31:52.666 bw ( KiB/s): min=26112, max=27648, per=33.47%, avg=26790.40, stdev=425.74, samples=20 00:31:52.666 iops : min= 204, max= 216, avg=209.30, stdev= 3.33, samples=20 00:31:52.666 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:31:52.666 cpu : usr=93.59%, sys=5.93%, ctx=15, majf=0, minf=148 00:31:52.666 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.666 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.666 filename0: (groupid=0, jobs=1): err= 0: pid=3107523: Fri Nov 15 11:50:31 2024 00:31:52.666 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(258MiB/10044msec) 00:31:52.666 slat (nsec): min=4258, max=30497, avg=15900.22, stdev=2229.20 00:31:52.666 clat (usec): min=11327, max=50571, avg=14537.96, stdev=1393.29 00:31:52.666 lat (usec): min=11344, max=50588, avg=14553.86, stdev=1393.23 00:31:52.666 clat percentiles (usec): 00:31:52.666 | 1.00th=[12518], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:31:52.666 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:31:52.666 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:31:52.666 | 99.00th=[16909], 99.50th=[17433], 99.90th=[21890], 99.95th=[45351], 00:31:52.666 | 99.99th=[50594] 00:31:52.666 bw ( KiB/s): min=25856, max=26880, per=33.02%, avg=26432.00, stdev=341.19, samples=20 00:31:52.666 iops : min= 202, max= 210, avg=206.50, stdev= 2.67, samples=20 00:31:52.666 lat (msec) : 20=99.81%, 50=0.15%, 100=0.05% 00:31:52.666 cpu : usr=93.12%, sys=6.34%, ctx=16, majf=0, minf=124 00:31:52.666 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.666 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.666 filename0: (groupid=0, jobs=1): err= 0: pid=3107524: Fri Nov 15 11:50:31 2024 00:31:52.666 read: IOPS=210, BW=26.4MiB/s (27.6MB/s)(265MiB/10045msec) 00:31:52.666 slat (nsec): min=4269, max=28924, avg=14158.53, stdev=1276.38 00:31:52.666 clat (usec): min=11025, max=55073, avg=14183.51, stdev=1571.80 00:31:52.666 lat (usec): min=11040, max=55086, avg=14197.67, stdev=1571.75 00:31:52.666 clat percentiles (usec): 00:31:52.666 | 1.00th=[11994], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 
00:31:52.666 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:31:52.666 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[15926], 00:31:52.666 | 99.00th=[16909], 99.50th=[17171], 99.90th=[22414], 99.95th=[52167], 00:31:52.666 | 99.99th=[55313] 00:31:52.666 bw ( KiB/s): min=25856, max=27648, per=33.86%, avg=27097.60, stdev=457.00, samples=20 00:31:52.666 iops : min= 202, max= 216, avg=211.70, stdev= 3.57, samples=20 00:31:52.667 lat (msec) : 20=99.76%, 50=0.14%, 100=0.09% 00:31:52.667 cpu : usr=94.12%, sys=5.40%, ctx=15, majf=0, minf=115 00:31:52.667 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:52.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.667 issued rwts: total=2119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.667 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:52.667 00:31:52.667 Run status group 0 (all jobs): 00:31:52.667 READ: bw=78.2MiB/s (82.0MB/s), 25.7MiB/s-26.4MiB/s (27.0MB/s-27.6MB/s), io=785MiB (823MB), run=10043-10045msec 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.667 00:31:52.667 real 0m11.106s 00:31:52.667 user 0m29.317s 00:31:52.667 sys 0m2.037s 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.667 11:50:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:52.667 ************************************ 00:31:52.667 END TEST fio_dif_digest 00:31:52.667 ************************************ 00:31:52.667 11:50:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:52.667 11:50:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.667 rmmod nvme_tcp 00:31:52.667 rmmod nvme_fabrics 00:31:52.667 rmmod nvme_keyring 00:31:52.667 11:50:31 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3101458 ']' 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3101458 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3101458 ']' 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3101458 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3101458 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101458' 00:31:52.667 killing process with pid 3101458 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3101458 00:31:52.667 11:50:31 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3101458 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:52.667 11:50:31 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:52.667 Waiting for block devices as requested 00:31:52.667 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:52.667 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:52.667 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:52.926 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:52.926 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:52.926 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:52.926 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.185 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.185 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:53.185 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:53.443 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:53.443 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:53.443 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:53.701 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:53.701 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:53.701 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.701 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.995 11:50:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.995 11:50:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:53.995 11:50:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.927 11:50:36 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.927 
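nvmftestfini above unwinds the target environment: the nvme transport modules are removed, the long-running nvmf target application is killed, setup.sh reset hands the test PCI devices back to their kernel drivers, and the firewall and addressing tweaks are reverted (remove_spdk_ns is traced but its body is not shown here). The traced commands condense to roughly the following sketch:

# condensed from the nvmftestfini trace above
modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics      # also drops nvme_fabrics/nvme_keyring, per the rmmod output
kill 3101458 && wait 3101458                                # killprocess: the nvmf target app started earlier
iptables-save | grep -v SPDK_NVMF | iptables-restore        # strip only the SPDK_NVMF rules
ip -4 addr flush cvl_0_1                                    # clear the test interface address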
00:31:55.927 real 1m7.213s 00:31:55.927 user 6m30.424s 00:31:55.927 sys 0m17.756s 00:31:55.927 11:50:36 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.927 11:50:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.927 ************************************ 00:31:55.927 END TEST nvmf_dif 00:31:55.927 ************************************ 00:31:55.927 11:50:36 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.927 11:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:55.927 11:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.927 11:50:36 -- common/autotest_common.sh@10 -- # set +x 00:31:55.927 ************************************ 00:31:55.927 START TEST nvmf_abort_qd_sizes 00:31:55.927 ************************************ 00:31:55.927 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.927 * Looking for test storage... 00:31:55.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.927 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:55.927 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:31:55.927 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.186 --rc genhtml_branch_coverage=1 00:31:56.186 --rc genhtml_function_coverage=1 00:31:56.186 --rc genhtml_legend=1 00:31:56.186 --rc geninfo_all_blocks=1 00:31:56.186 --rc geninfo_unexecuted_blocks=1 00:31:56.186 00:31:56.186 ' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.186 --rc genhtml_branch_coverage=1 00:31:56.186 --rc genhtml_function_coverage=1 00:31:56.186 --rc genhtml_legend=1 00:31:56.186 --rc geninfo_all_blocks=1 00:31:56.186 --rc geninfo_unexecuted_blocks=1 00:31:56.186 00:31:56.186 ' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.186 --rc genhtml_branch_coverage=1 00:31:56.186 --rc genhtml_function_coverage=1 00:31:56.186 --rc genhtml_legend=1 00:31:56.186 --rc geninfo_all_blocks=1 00:31:56.186 --rc geninfo_unexecuted_blocks=1 00:31:56.186 00:31:56.186 ' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:56.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.186 --rc genhtml_branch_coverage=1 00:31:56.186 --rc genhtml_function_coverage=1 00:31:56.186 --rc genhtml_legend=1 00:31:56.186 --rc geninfo_all_blocks=1 00:31:56.186 --rc geninfo_unexecuted_blocks=1 00:31:56.186 00:31:56.186 ' 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.186 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:56.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:31:56.187 11:50:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.721 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:58.722 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:58.722 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:58.722 Found net devices under 0000:09:00.0: cvl_0_0 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:58.722 Found net devices under 0000:09:00.1: cvl_0_1 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.722 11:50:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:31:58.722 00:31:58.722 --- 10.0.0.2 ping statistics --- 00:31:58.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.722 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:31:58.722 00:31:58.722 --- 10.0.0.1 ping statistics --- 00:31:58.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.722 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:58.722 11:50:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:59.660 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.660 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:59.660 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:00.597 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3112448 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3112448 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3112448 ']' 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:00.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.855 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.855 [2024-11-15 11:50:41.150064] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:32:00.855 [2024-11-15 11:50:41.150150] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.855 [2024-11-15 11:50:41.222695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:01.113 [2024-11-15 11:50:41.285897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.113 [2024-11-15 11:50:41.285946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.113 [2024-11-15 11:50:41.285961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.113 [2024-11-15 11:50:41.285972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.113 [2024-11-15 11:50:41.285982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.113 [2024-11-15 11:50:41.287567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.113 [2024-11-15 11:50:41.287621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.113 [2024-11-15 11:50:41.287688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.113 [2024-11-15 11:50:41.287692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:01.113 
11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.113 11:50:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:01.113 ************************************ 00:32:01.113 START TEST spdk_target_abort 00:32:01.113 ************************************ 00:32:01.113 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:32:01.113 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:01.113 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:32:01.113 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.113 11:50:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.395 spdk_targetn1 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.395 [2024-11-15 11:50:44.325455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:04.395 [2024-11-15 11:50:44.375669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:04.395 11:50:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:07.676 Initializing NVMe Controllers 00:32:07.676 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:07.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:07.676 Initialization complete. Launching workers. 00:32:07.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12200, failed: 0 00:32:07.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 10981 00:32:07.676 success 729, unsuccessful 490, failed 0 00:32:07.676 11:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:07.676 11:50:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:10.957 Initializing NVMe Controllers 00:32:10.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:10.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:10.957 Initialization complete. Launching workers. 00:32:10.957 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8851, failed: 0 00:32:10.957 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 7639 00:32:10.957 success 341, unsuccessful 871, failed 0 00:32:10.957 11:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.957 11:50:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.254 Initializing NVMe Controllers 00:32:14.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:14.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:14.254 Initialization complete. Launching workers. 
00:32:14.254 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31326, failed: 0 00:32:14.254 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2598, failed to submit 28728 00:32:14.254 success 541, unsuccessful 2057, failed 0 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.254 11:50:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3112448 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3112448 ']' 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3112448 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112448 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112448' 00:32:15.188 killing process with pid 3112448 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3112448 00:32:15.188 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3112448 00:32:15.446 00:32:15.446 real 0m14.230s 00:32:15.446 user 0m53.536s 00:32:15.446 sys 0m2.858s 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:15.446 ************************************ 00:32:15.446 END TEST spdk_target_abort 00:32:15.446 ************************************ 00:32:15.446 11:50:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:15.446 11:50:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:15.446 11:50:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:15.446 11:50:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:15.446 ************************************ 00:32:15.446 START TEST kernel_target_abort 00:32:15.446 
************************************ 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:15.446 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:15.447 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:32:15.447 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:15.447 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:15.447 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:15.447 11:50:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:16.821 Waiting for block devices as requested 00:32:16.821 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.821 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.821 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.821 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.821 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.821 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.079 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.079 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.079 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:17.337 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:17.337 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:17.337 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:17.337 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:17.595 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:17.595 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.595 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.853 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:17.853 No valid GPT data, bailing 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:17.853 11:50:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:17.853 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:17.854 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:17.854 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:17.854 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:17.854 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:17.854 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:17.854 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:18.112 00:32:18.112 Discovery Log Number of Records 2, Generation counter 2 00:32:18.112 =====Discovery Log Entry 0====== 00:32:18.112 trtype: tcp 00:32:18.112 adrfam: ipv4 00:32:18.112 subtype: current discovery subsystem 00:32:18.112 treq: not specified, sq flow control disable supported 00:32:18.112 portid: 1 00:32:18.112 trsvcid: 4420 00:32:18.112 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:18.112 traddr: 10.0.0.1 00:32:18.112 eflags: none 00:32:18.112 sectype: none 00:32:18.112 =====Discovery Log Entry 1====== 00:32:18.112 trtype: tcp 00:32:18.112 adrfam: ipv4 00:32:18.112 subtype: nvme subsystem 00:32:18.112 treq: not specified, sq flow control disable supported 00:32:18.112 portid: 1 00:32:18.112 trsvcid: 4420 00:32:18.112 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:18.112 traddr: 10.0.0.1 00:32:18.112 eflags: none 00:32:18.112 sectype: none 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.112 11:50:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:18.112 11:50:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.392 Initializing NVMe Controllers 00:32:21.392 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:21.392 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:21.392 Initialization complete. Launching workers. 00:32:21.392 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48369, failed: 0 00:32:21.392 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48369, failed to submit 0 00:32:21.392 success 0, unsuccessful 48369, failed 0 00:32:21.392 11:51:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:21.392 11:51:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:24.672 Initializing NVMe Controllers 00:32:24.672 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:24.672 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:24.672 Initialization complete. Launching workers. 
00:32:24.672 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92815, failed: 0 00:32:24.672 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20882, failed to submit 71933 00:32:24.672 success 0, unsuccessful 20882, failed 0 00:32:24.672 11:51:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:24.672 11:51:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:27.956 Initializing NVMe Controllers 00:32:27.956 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:27.956 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:27.956 Initialization complete. Launching workers. 00:32:27.956 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86818, failed: 0 00:32:27.956 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21674, failed to submit 65144 00:32:27.956 success 0, unsuccessful 21674, failed 0 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:27.956 11:51:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:28.891 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:28.891 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:28.891 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:32:28.891 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:29.827 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:29.827 00:32:29.827 real 0m14.467s 00:32:29.827 user 0m6.156s 00:32:29.827 sys 0m3.513s 00:32:29.827 11:51:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.827 11:51:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:29.827 ************************************ 00:32:29.827 END TEST kernel_target_abort 00:32:29.827 ************************************ 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.827 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.827 rmmod nvme_tcp 00:32:30.084 rmmod nvme_fabrics 00:32:30.084 rmmod nvme_keyring 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3112448 ']' 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3112448 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3112448 ']' 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3112448 00:32:30.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3112448) - No such process 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3112448 is not found' 00:32:30.084 Process with pid 3112448 is not found 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:30.084 11:51:10 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:31.047 Waiting for block devices as requested 00:32:31.347 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:31.347 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:31.347 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:31.605 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:31.605 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:31.605 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:31.605 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:31.605 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:31.862 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:31.862 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:32.120 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:32.120 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:32.120 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:32.120 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:32.378 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:32.378 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:32.378 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:32:32.638 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.638 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.638 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:32.638 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:32.639 11:51:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.546 11:51:14 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.546 00:32:34.546 real 0m38.566s 00:32:34.546 user 1m1.995s 00:32:34.546 sys 0m9.994s 00:32:34.546 11:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.546 11:51:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:34.546 ************************************ 00:32:34.546 END TEST nvmf_abort_qd_sizes 00:32:34.546 ************************************ 00:32:34.546 11:51:14 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:34.546 11:51:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:34.546 11:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.546 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:32:34.546 ************************************ 00:32:34.546 START TEST keyring_file 00:32:34.546 ************************************ 00:32:34.546 11:51:14 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:34.805 * Looking for test storage... 
00:32:34.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:34.805 11:51:14 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:34.805 11:51:14 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:32:34.805 11:51:14 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:34.805 11:51:15 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.805 11:51:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:34.805 11:51:15 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.805 11:51:15 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.805 --rc genhtml_branch_coverage=1 00:32:34.805 --rc genhtml_function_coverage=1 00:32:34.805 --rc genhtml_legend=1 00:32:34.805 --rc geninfo_all_blocks=1 00:32:34.805 --rc geninfo_unexecuted_blocks=1 00:32:34.805 00:32:34.805 ' 00:32:34.805 11:51:15 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:34.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.805 --rc genhtml_branch_coverage=1 00:32:34.805 --rc genhtml_function_coverage=1 00:32:34.805 --rc genhtml_legend=1 00:32:34.805 --rc geninfo_all_blocks=1 
00:32:34.805 --rc geninfo_unexecuted_blocks=1 00:32:34.805 00:32:34.805 ' 00:32:34.805 11:51:15 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:34.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.806 --rc genhtml_branch_coverage=1 00:32:34.806 --rc genhtml_function_coverage=1 00:32:34.806 --rc genhtml_legend=1 00:32:34.806 --rc geninfo_all_blocks=1 00:32:34.806 --rc geninfo_unexecuted_blocks=1 00:32:34.806 00:32:34.806 ' 00:32:34.806 11:51:15 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:34.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.806 --rc genhtml_branch_coverage=1 00:32:34.806 --rc genhtml_function_coverage=1 00:32:34.806 --rc genhtml_legend=1 00:32:34.806 --rc geninfo_all_blocks=1 00:32:34.806 --rc geninfo_unexecuted_blocks=1 00:32:34.806 00:32:34.806 ' 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.806 11:51:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.806 11:51:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.806 11:51:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.806 11:51:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.806 11:51:15 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.806 11:51:15 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.806 11:51:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.806 11:51:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:34.806 11:51:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:34.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
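The prep_key helper traced above is about to write each 16-byte PSK to a mktemp file in the NVMe TLS PSK interchange form (the NVMeTLSkey-1 string that format_interchange_psk produces via the inline python snippet) and restrict it to 0600. A minimal stand-alone sketch of that formatting step, assuming the interchange string is base64 of the raw key bytes followed by their CRC-32; the CRC byte order and the two-digit hash field are assumptions, since the helper's internals are not shown in this log:

    import base64, binascii

    def format_interchange_psk(key_hex, hash_id=0):
        # Sketch only: wrap a configured PSK in an NVMeTLSkey-1 interchange string.
        # hash_id 0 is taken to mean "no hash"; the little-endian CRC-32 suffix and
        # the zero-padded hash field are assumptions about the helper's output.
        key = bytes.fromhex(key_hex)
        crc = binascii.crc32(key).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(key + crc).decode())

    # key0 / key1 values as used by this test
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
    print(format_interchange_psk("112233445566778899aabbccddeeff00", 0))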
00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.pbHjUNoyRQ 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pbHjUNoyRQ 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.pbHjUNoyRQ 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.pbHjUNoyRQ 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bKncUGYW8G 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:34.806 11:51:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bKncUGYW8G 00:32:34.806 11:51:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bKncUGYW8G 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.bKncUGYW8G 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=3118852 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:34.806 11:51:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3118852 00:32:34.806 11:51:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3118852 ']' 00:32:34.806 11:51:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.806 11:51:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.806 11:51:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.807 11:51:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.807 11:51:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:34.807 [2024-11-15 11:51:15.209848] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:32:34.807 [2024-11-15 11:51:15.209942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118852 ] 00:32:35.065 [2024-11-15 11:51:15.278022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.065 [2024-11-15 11:51:15.338116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:35.323 11:51:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:35.323 [2024-11-15 11:51:15.613014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.323 null0 00:32:35.323 [2024-11-15 11:51:15.645050] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:35.323 [2024-11-15 11:51:15.645605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.323 11:51:15 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.323 11:51:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:35.323 [2024-11-15 11:51:15.669077] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:35.323 request: 00:32:35.323 { 00:32:35.323 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.324 "secure_channel": false, 00:32:35.324 "listen_address": { 00:32:35.324 "trtype": "tcp", 00:32:35.324 "traddr": "127.0.0.1", 00:32:35.324 "trsvcid": "4420" 00:32:35.324 }, 00:32:35.324 "method": "nvmf_subsystem_add_listener", 00:32:35.324 "req_id": 1 00:32:35.324 } 00:32:35.324 Got JSON-RPC error response 00:32:35.324 response: 00:32:35.324 { 00:32:35.324 
"code": -32602, 00:32:35.324 "message": "Invalid parameters" 00:32:35.324 } 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:35.324 11:51:15 keyring_file -- keyring/file.sh@47 -- # bperfpid=3118867 00:32:35.324 11:51:15 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:35.324 11:51:15 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3118867 /var/tmp/bperf.sock 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3118867 ']' 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:35.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.324 11:51:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:35.324 [2024-11-15 11:51:15.717091] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:32:35.324 [2024-11-15 11:51:15.717156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118867 ] 00:32:35.598 [2024-11-15 11:51:15.782916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.598 [2024-11-15 11:51:15.841565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.598 11:51:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.598 11:51:15 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:35.598 11:51:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:35.598 11:51:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:35.856 11:51:16 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bKncUGYW8G 00:32:35.856 11:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bKncUGYW8G 00:32:36.114 11:51:16 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:36.114 11:51:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:36.114 11:51:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.114 11:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.114 11:51:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:32:36.372 11:51:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.pbHjUNoyRQ == \/\t\m\p\/\t\m\p\.\p\b\H\j\U\N\o\y\R\Q ]] 00:32:36.372 11:51:16 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:36.372 11:51:16 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:36.372 11:51:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.372 11:51:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.372 11:51:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:36.631 11:51:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.bKncUGYW8G == \/\t\m\p\/\t\m\p\.\b\K\n\c\U\G\Y\W\8\G ]] 00:32:36.631 11:51:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:36.631 11:51:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.631 11:51:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.631 11:51:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.631 11:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.631 11:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.889 11:51:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:36.889 11:51:17 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:36.889 11:51:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:36.889 11:51:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.889 11:51:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.889 11:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.889 11:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:37.455 11:51:17 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:37.455 11:51:17 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:37.455 11:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:37.455 [2024-11-15 11:51:17.820180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:37.713 nvme0n1 00:32:37.713 11:51:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:37.713 11:51:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:37.713 11:51:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.713 11:51:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.713 11:51:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:37.713 11:51:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.971 11:51:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:37.971 11:51:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:37.971 11:51:18 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:32:37.971 11:51:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.971 11:51:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.971 11:51:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.971 11:51:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:38.229 11:51:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:38.229 11:51:18 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:38.229 Running I/O for 1 seconds... 00:32:39.162 10433.00 IOPS, 40.75 MiB/s 00:32:39.162 Latency(us) 00:32:39.162 [2024-11-15T10:51:19.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.162 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:39.162 nvme0n1 : 1.01 10484.45 40.95 0.00 0.00 12171.34 3835.07 17670.45 00:32:39.162 [2024-11-15T10:51:19.589Z] =================================================================================================================== 00:32:39.162 [2024-11-15T10:51:19.589Z] Total : 10484.45 40.95 0.00 0.00 12171.34 3835.07 17670.45 00:32:39.162 { 00:32:39.162 "results": [ 00:32:39.162 { 00:32:39.162 "job": "nvme0n1", 00:32:39.162 "core_mask": "0x2", 00:32:39.162 "workload": "randrw", 00:32:39.162 "percentage": 50, 00:32:39.162 "status": "finished", 00:32:39.162 "queue_depth": 128, 00:32:39.162 "io_size": 4096, 00:32:39.162 "runtime": 1.007397, 00:32:39.162 "iops": 10484.44654887795, 00:32:39.162 "mibps": 40.95486933155449, 00:32:39.162 "io_failed": 0, 00:32:39.162 "io_timeout": 0, 00:32:39.162 "avg_latency_us": 12171.342408494464, 00:32:39.162 "min_latency_us": 3835.0696296296296, 00:32:39.162 "max_latency_us": 17670.447407407406 00:32:39.162 } 00:32:39.162 ], 00:32:39.162 "core_count": 1 00:32:39.162 } 00:32:39.420 11:51:19 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:39.420 11:51:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:39.678 11:51:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:39.678 11:51:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:39.678 11:51:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.678 11:51:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.678 11:51:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:39.678 11:51:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.936 11:51:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:39.936 11:51:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:39.936 11:51:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:39.936 11:51:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.936 11:51:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.936 11:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.936 11:51:20 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:40.193 11:51:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:40.193 11:51:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:40.193 11:51:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:40.193 11:51:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:40.193 11:51:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:40.193 11:51:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.193 11:51:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:40.194 11:51:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:40.194 11:51:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:40.194 11:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:40.451 [2024-11-15 11:51:20.685233] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:40.451 [2024-11-15 11:51:20.685872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf35510 (107): Transport endpoint is not connected 00:32:40.451 [2024-11-15 11:51:20.686863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf35510 (9): Bad file descriptor 00:32:40.451 [2024-11-15 11:51:20.687862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:40.451 [2024-11-15 11:51:20.687881] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:40.451 [2024-11-15 11:51:20.687894] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:40.451 [2024-11-15 11:51:20.687909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:32:40.451 request: 00:32:40.451 { 00:32:40.451 "name": "nvme0", 00:32:40.451 "trtype": "tcp", 00:32:40.451 "traddr": "127.0.0.1", 00:32:40.451 "adrfam": "ipv4", 00:32:40.451 "trsvcid": "4420", 00:32:40.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:40.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:40.451 "prchk_reftag": false, 00:32:40.451 "prchk_guard": false, 00:32:40.451 "hdgst": false, 00:32:40.451 "ddgst": false, 00:32:40.451 "psk": "key1", 00:32:40.451 "allow_unrecognized_csi": false, 00:32:40.451 "method": "bdev_nvme_attach_controller", 00:32:40.451 "req_id": 1 00:32:40.451 } 00:32:40.451 Got JSON-RPC error response 00:32:40.451 response: 00:32:40.451 { 00:32:40.451 "code": -5, 00:32:40.451 "message": "Input/output error" 00:32:40.451 } 00:32:40.451 11:51:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:40.451 11:51:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:40.451 11:51:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:40.451 11:51:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:40.451 11:51:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:40.451 11:51:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:40.451 11:51:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.451 11:51:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.451 11:51:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:40.451 11:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.709 11:51:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:40.709 11:51:20 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:40.709 11:51:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:40.709 11:51:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.709 11:51:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.709 11:51:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.709 11:51:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:40.966 11:51:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:40.966 11:51:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:40.966 11:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:41.224 11:51:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:41.225 11:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:41.482 11:51:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:41.482 11:51:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:41.483 11:51:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.738 11:51:22 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:32:41.738 11:51:22 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.pbHjUNoyRQ 00:32:41.738 11:51:22 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.739 11:51:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:41.739 11:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:41.996 [2024-11-15 11:51:22.301831] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.pbHjUNoyRQ': 0100660 00:32:41.996 [2024-11-15 11:51:22.301864] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:41.996 request: 00:32:41.996 { 00:32:41.996 "name": "key0", 00:32:41.996 "path": "/tmp/tmp.pbHjUNoyRQ", 00:32:41.996 "method": "keyring_file_add_key", 00:32:41.996 "req_id": 1 00:32:41.996 } 00:32:41.996 Got JSON-RPC error response 00:32:41.996 response: 00:32:41.996 { 00:32:41.996 "code": -1, 00:32:41.996 "message": "Operation not permitted" 00:32:41.996 } 00:32:41.996 11:51:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:41.996 11:51:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:41.996 11:51:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:41.996 11:51:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:41.996 11:51:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.pbHjUNoyRQ 00:32:41.996 11:51:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:41.996 11:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.pbHjUNoyRQ 00:32:42.253 11:51:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.pbHjUNoyRQ 00:32:42.253 11:51:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:42.253 11:51:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:42.253 11:51:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.253 11:51:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.253 11:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.253 11:51:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.511 11:51:22 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:42.511 11:51:22 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:42.511 11:51:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.511 11:51:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:42.769 [2024-11-15 11:51:23.136153] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.pbHjUNoyRQ': No such file or directory 00:32:42.769 [2024-11-15 11:51:23.136186] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:42.769 [2024-11-15 11:51:23.136219] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:42.769 [2024-11-15 11:51:23.136231] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:42.769 [2024-11-15 11:51:23.136243] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:42.769 [2024-11-15 11:51:23.136254] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:42.769 request: 00:32:42.769 { 00:32:42.769 "name": "nvme0", 00:32:42.769 "trtype": "tcp", 00:32:42.769 "traddr": "127.0.0.1", 00:32:42.769 "adrfam": "ipv4", 00:32:42.769 "trsvcid": "4420", 00:32:42.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:42.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:42.769 "prchk_reftag": false, 00:32:42.769 "prchk_guard": false, 00:32:42.769 "hdgst": false, 00:32:42.769 "ddgst": false, 00:32:42.769 "psk": "key0", 00:32:42.769 "allow_unrecognized_csi": false, 00:32:42.769 "method": "bdev_nvme_attach_controller", 00:32:42.769 "req_id": 1 00:32:42.769 } 00:32:42.769 Got JSON-RPC error response 00:32:42.769 response: 00:32:42.769 { 00:32:42.769 "code": -19, 00:32:42.769 "message": "No such device" 00:32:42.769 } 00:32:42.769 11:51:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:42.769 11:51:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:42.769 11:51:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:42.769 11:51:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:42.769 11:51:23 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:42.769 11:51:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:43.026 11:51:23 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xnPCcxh1d1 00:32:43.026 11:51:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:43.026 11:51:23 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:43.026 11:51:23 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:43.026 11:51:23 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:43.026 11:51:23 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:43.026 11:51:23 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:43.026 11:51:23 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:43.284 11:51:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xnPCcxh1d1 00:32:43.284 11:51:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xnPCcxh1d1 00:32:43.284 11:51:23 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.xnPCcxh1d1 00:32:43.284 11:51:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xnPCcxh1d1 00:32:43.284 11:51:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xnPCcxh1d1 00:32:43.542 11:51:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:43.542 11:51:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:43.800 nvme0n1 00:32:43.800 11:51:24 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:43.800 11:51:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:43.800 11:51:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:43.800 11:51:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:43.800 11:51:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:43.800 11:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.057 11:51:24 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:44.057 11:51:24 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:44.057 11:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:44.315 11:51:24 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:44.315 11:51:24 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:44.315 11:51:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.315 11:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:32:44.315 11:51:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.572 11:51:24 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:44.572 11:51:24 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:44.572 11:51:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:44.572 11:51:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:44.572 11:51:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.572 11:51:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.572 11:51:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.829 11:51:25 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:44.829 11:51:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:44.829 11:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:45.087 11:51:25 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:45.087 11:51:25 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:45.087 11:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.345 11:51:25 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:45.345 11:51:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xnPCcxh1d1 00:32:45.345 11:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xnPCcxh1d1 00:32:45.604 11:51:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.bKncUGYW8G 00:32:45.604 11:51:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.bKncUGYW8G 00:32:45.862 11:51:26 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:45.862 11:51:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:46.427 nvme0n1 00:32:46.427 11:51:26 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:46.427 11:51:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:46.685 11:51:26 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:46.685 "subsystems": [ 00:32:46.685 { 00:32:46.685 "subsystem": "keyring", 00:32:46.685 "config": [ 00:32:46.685 { 00:32:46.685 "method": "keyring_file_add_key", 00:32:46.685 "params": { 00:32:46.685 "name": "key0", 00:32:46.685 "path": "/tmp/tmp.xnPCcxh1d1" 00:32:46.685 } 00:32:46.685 }, 00:32:46.685 { 00:32:46.685 "method": "keyring_file_add_key", 00:32:46.685 "params": { 00:32:46.685 "name": "key1", 00:32:46.685 "path": "/tmp/tmp.bKncUGYW8G" 00:32:46.685 } 00:32:46.685 } 00:32:46.685 ] 
00:32:46.685 }, 00:32:46.685 { 00:32:46.685 "subsystem": "iobuf", 00:32:46.685 "config": [ 00:32:46.685 { 00:32:46.685 "method": "iobuf_set_options", 00:32:46.686 "params": { 00:32:46.686 "small_pool_count": 8192, 00:32:46.686 "large_pool_count": 1024, 00:32:46.686 "small_bufsize": 8192, 00:32:46.686 "large_bufsize": 135168, 00:32:46.686 "enable_numa": false 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "sock", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "sock_set_default_impl", 00:32:46.686 "params": { 00:32:46.686 "impl_name": "posix" 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "sock_impl_set_options", 00:32:46.686 "params": { 00:32:46.686 "impl_name": "ssl", 00:32:46.686 "recv_buf_size": 4096, 00:32:46.686 "send_buf_size": 4096, 00:32:46.686 "enable_recv_pipe": true, 00:32:46.686 "enable_quickack": false, 00:32:46.686 "enable_placement_id": 0, 00:32:46.686 "enable_zerocopy_send_server": true, 00:32:46.686 "enable_zerocopy_send_client": false, 00:32:46.686 "zerocopy_threshold": 0, 00:32:46.686 "tls_version": 0, 00:32:46.686 "enable_ktls": false 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "sock_impl_set_options", 00:32:46.686 "params": { 00:32:46.686 "impl_name": "posix", 00:32:46.686 "recv_buf_size": 2097152, 00:32:46.686 "send_buf_size": 2097152, 00:32:46.686 "enable_recv_pipe": true, 00:32:46.686 "enable_quickack": false, 00:32:46.686 "enable_placement_id": 0, 00:32:46.686 "enable_zerocopy_send_server": true, 00:32:46.686 "enable_zerocopy_send_client": false, 00:32:46.686 "zerocopy_threshold": 0, 00:32:46.686 "tls_version": 0, 00:32:46.686 "enable_ktls": false 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "vmd", 00:32:46.686 "config": [] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "accel", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "accel_set_options", 00:32:46.686 "params": { 00:32:46.686 "small_cache_size": 128, 00:32:46.686 "large_cache_size": 16, 00:32:46.686 "task_count": 2048, 00:32:46.686 "sequence_count": 2048, 00:32:46.686 "buf_count": 2048 00:32:46.686 } 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "bdev", 00:32:46.686 "config": [ 00:32:46.686 { 00:32:46.686 "method": "bdev_set_options", 00:32:46.686 "params": { 00:32:46.686 "bdev_io_pool_size": 65535, 00:32:46.686 "bdev_io_cache_size": 256, 00:32:46.686 "bdev_auto_examine": true, 00:32:46.686 "iobuf_small_cache_size": 128, 00:32:46.686 "iobuf_large_cache_size": 16 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_raid_set_options", 00:32:46.686 "params": { 00:32:46.686 "process_window_size_kb": 1024, 00:32:46.686 "process_max_bandwidth_mb_sec": 0 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_iscsi_set_options", 00:32:46.686 "params": { 00:32:46.686 "timeout_sec": 30 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_nvme_set_options", 00:32:46.686 "params": { 00:32:46.686 "action_on_timeout": "none", 00:32:46.686 "timeout_us": 0, 00:32:46.686 "timeout_admin_us": 0, 00:32:46.686 "keep_alive_timeout_ms": 10000, 00:32:46.686 "arbitration_burst": 0, 00:32:46.686 "low_priority_weight": 0, 00:32:46.686 "medium_priority_weight": 0, 00:32:46.686 "high_priority_weight": 0, 00:32:46.686 "nvme_adminq_poll_period_us": 10000, 00:32:46.686 "nvme_ioq_poll_period_us": 0, 00:32:46.686 "io_queue_requests": 512, 
00:32:46.686 "delay_cmd_submit": true, 00:32:46.686 "transport_retry_count": 4, 00:32:46.686 "bdev_retry_count": 3, 00:32:46.686 "transport_ack_timeout": 0, 00:32:46.686 "ctrlr_loss_timeout_sec": 0, 00:32:46.686 "reconnect_delay_sec": 0, 00:32:46.686 "fast_io_fail_timeout_sec": 0, 00:32:46.686 "disable_auto_failback": false, 00:32:46.686 "generate_uuids": false, 00:32:46.686 "transport_tos": 0, 00:32:46.686 "nvme_error_stat": false, 00:32:46.686 "rdma_srq_size": 0, 00:32:46.686 "io_path_stat": false, 00:32:46.686 "allow_accel_sequence": false, 00:32:46.686 "rdma_max_cq_size": 0, 00:32:46.686 "rdma_cm_event_timeout_ms": 0, 00:32:46.686 "dhchap_digests": [ 00:32:46.686 "sha256", 00:32:46.686 "sha384", 00:32:46.686 "sha512" 00:32:46.686 ], 00:32:46.686 "dhchap_dhgroups": [ 00:32:46.686 "null", 00:32:46.686 "ffdhe2048", 00:32:46.686 "ffdhe3072", 00:32:46.686 "ffdhe4096", 00:32:46.686 "ffdhe6144", 00:32:46.686 "ffdhe8192" 00:32:46.686 ] 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_nvme_attach_controller", 00:32:46.686 "params": { 00:32:46.686 "name": "nvme0", 00:32:46.686 "trtype": "TCP", 00:32:46.686 "adrfam": "IPv4", 00:32:46.686 "traddr": "127.0.0.1", 00:32:46.686 "trsvcid": "4420", 00:32:46.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.686 "prchk_reftag": false, 00:32:46.686 "prchk_guard": false, 00:32:46.686 "ctrlr_loss_timeout_sec": 0, 00:32:46.686 "reconnect_delay_sec": 0, 00:32:46.686 "fast_io_fail_timeout_sec": 0, 00:32:46.686 "psk": "key0", 00:32:46.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.686 "hdgst": false, 00:32:46.686 "ddgst": false, 00:32:46.686 "multipath": "multipath" 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_nvme_set_hotplug", 00:32:46.686 "params": { 00:32:46.686 "period_us": 100000, 00:32:46.686 "enable": false 00:32:46.686 } 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "method": "bdev_wait_for_examine" 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }, 00:32:46.686 { 00:32:46.686 "subsystem": "nbd", 00:32:46.686 "config": [] 00:32:46.686 } 00:32:46.686 ] 00:32:46.686 }' 00:32:46.686 11:51:26 keyring_file -- keyring/file.sh@115 -- # killprocess 3118867 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3118867 ']' 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3118867 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3118867 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3118867' 00:32:46.686 killing process with pid 3118867 00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@973 -- # kill 3118867 00:32:46.686 Received shutdown signal, test time was about 1.000000 seconds 00:32:46.686 00:32:46.686 Latency(us) 00:32:46.686 [2024-11-15T10:51:27.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.686 [2024-11-15T10:51:27.113Z] =================================================================================================================== 00:32:46.686 [2024-11-15T10:51:27.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:32:46.686 11:51:26 keyring_file -- common/autotest_common.sh@978 -- # wait 3118867 00:32:46.945 11:51:27 keyring_file -- keyring/file.sh@118 -- # bperfpid=3120348 00:32:46.945 11:51:27 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3120348 /var/tmp/bperf.sock 00:32:46.945 11:51:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3120348 ']' 00:32:46.945 11:51:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.945 11:51:27 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:46.945 11:51:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.945 11:51:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.945 11:51:27 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:46.945 "subsystems": [ 00:32:46.945 { 00:32:46.945 "subsystem": "keyring", 00:32:46.945 "config": [ 00:32:46.945 { 00:32:46.945 "method": "keyring_file_add_key", 00:32:46.945 "params": { 00:32:46.945 "name": "key0", 00:32:46.945 "path": "/tmp/tmp.xnPCcxh1d1" 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "keyring_file_add_key", 00:32:46.945 "params": { 00:32:46.945 "name": "key1", 00:32:46.945 "path": "/tmp/tmp.bKncUGYW8G" 00:32:46.945 } 00:32:46.945 } 00:32:46.945 ] 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "subsystem": "iobuf", 00:32:46.945 "config": [ 00:32:46.945 { 00:32:46.945 "method": "iobuf_set_options", 00:32:46.945 "params": { 00:32:46.945 "small_pool_count": 8192, 00:32:46.945 "large_pool_count": 1024, 00:32:46.945 "small_bufsize": 8192, 00:32:46.945 "large_bufsize": 135168, 00:32:46.945 "enable_numa": false 00:32:46.945 } 00:32:46.945 } 00:32:46.945 ] 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "subsystem": "sock", 00:32:46.945 "config": [ 00:32:46.945 { 00:32:46.945 "method": "sock_set_default_impl", 00:32:46.945 "params": { 00:32:46.945 "impl_name": "posix" 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "sock_impl_set_options", 00:32:46.945 "params": { 00:32:46.945 "impl_name": "ssl", 00:32:46.945 "recv_buf_size": 4096, 00:32:46.945 "send_buf_size": 4096, 00:32:46.945 "enable_recv_pipe": true, 00:32:46.945 "enable_quickack": false, 00:32:46.945 "enable_placement_id": 0, 00:32:46.945 "enable_zerocopy_send_server": true, 00:32:46.945 "enable_zerocopy_send_client": false, 00:32:46.945 "zerocopy_threshold": 0, 00:32:46.945 "tls_version": 0, 00:32:46.945 "enable_ktls": false 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "sock_impl_set_options", 00:32:46.945 "params": { 00:32:46.945 "impl_name": "posix", 00:32:46.945 "recv_buf_size": 2097152, 00:32:46.945 "send_buf_size": 2097152, 00:32:46.945 "enable_recv_pipe": true, 00:32:46.945 "enable_quickack": false, 00:32:46.945 "enable_placement_id": 0, 00:32:46.945 "enable_zerocopy_send_server": true, 00:32:46.945 "enable_zerocopy_send_client": false, 00:32:46.945 "zerocopy_threshold": 0, 00:32:46.945 "tls_version": 0, 00:32:46.945 "enable_ktls": false 00:32:46.945 } 00:32:46.945 } 00:32:46.945 ] 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "subsystem": "vmd", 00:32:46.945 "config": [] 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "subsystem": "accel", 00:32:46.945 
"config": [ 00:32:46.945 { 00:32:46.945 "method": "accel_set_options", 00:32:46.945 "params": { 00:32:46.945 "small_cache_size": 128, 00:32:46.945 "large_cache_size": 16, 00:32:46.945 "task_count": 2048, 00:32:46.945 "sequence_count": 2048, 00:32:46.945 "buf_count": 2048 00:32:46.945 } 00:32:46.945 } 00:32:46.945 ] 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "subsystem": "bdev", 00:32:46.945 "config": [ 00:32:46.945 { 00:32:46.945 "method": "bdev_set_options", 00:32:46.945 "params": { 00:32:46.945 "bdev_io_pool_size": 65535, 00:32:46.945 "bdev_io_cache_size": 256, 00:32:46.945 "bdev_auto_examine": true, 00:32:46.945 "iobuf_small_cache_size": 128, 00:32:46.945 "iobuf_large_cache_size": 16 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "bdev_raid_set_options", 00:32:46.945 "params": { 00:32:46.945 "process_window_size_kb": 1024, 00:32:46.945 "process_max_bandwidth_mb_sec": 0 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "bdev_iscsi_set_options", 00:32:46.945 "params": { 00:32:46.945 "timeout_sec": 30 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "bdev_nvme_set_options", 00:32:46.945 "params": { 00:32:46.945 "action_on_timeout": "none", 00:32:46.945 "timeout_us": 0, 00:32:46.945 "timeout_admin_us": 0, 00:32:46.945 "keep_alive_timeout_ms": 10000, 00:32:46.945 "arbitration_burst": 0, 00:32:46.945 "low_priority_weight": 0, 00:32:46.945 "medium_priority_weight": 0, 00:32:46.945 "high_priority_weight": 0, 00:32:46.945 "nvme_adminq_poll_period_us": 10000, 00:32:46.945 "nvme_ioq_poll_period_us": 0, 00:32:46.945 "io_queue_requests": 512, 00:32:46.945 "delay_cmd_submit": true, 00:32:46.945 "transport_retry_count": 4, 00:32:46.945 "bdev_retry_count": 3, 00:32:46.945 "transport_ack_timeout": 0, 00:32:46.945 "ctrlr_loss_timeout_sec": 0, 00:32:46.945 "reconnect_delay_sec": 0, 00:32:46.945 "fast_io_fail_timeout_sec": 0, 00:32:46.945 "disable_auto_failback": false, 00:32:46.945 "generate_uuids": false, 00:32:46.945 "transport_tos": 0, 00:32:46.945 "nvme_error_stat": false, 00:32:46.945 "rdma_srq_size": 0, 00:32:46.945 "io_path_stat": false, 00:32:46.945 "allow_accel_sequence": false, 00:32:46.945 "rdma_max_cq_size": 0, 00:32:46.945 "rdma_cm_event_timeout_ms": 0, 00:32:46.945 "dhchap_digests": [ 00:32:46.945 "sha256", 00:32:46.945 "sha384", 00:32:46.945 "sha512" 00:32:46.945 ], 00:32:46.945 "dhchap_dhgroups": [ 00:32:46.945 "null", 00:32:46.945 "ffdhe2048", 00:32:46.945 "ffdhe3072", 00:32:46.945 "ffdhe4096", 00:32:46.945 "ffdhe6144", 00:32:46.945 "ffdhe8192" 00:32:46.945 ] 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "bdev_nvme_attach_controller", 00:32:46.945 "params": { 00:32:46.945 "name": "nvme0", 00:32:46.945 "trtype": "TCP", 00:32:46.945 "adrfam": "IPv4", 00:32:46.945 "traddr": "127.0.0.1", 00:32:46.945 "trsvcid": "4420", 00:32:46.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.945 "prchk_reftag": false, 00:32:46.945 "prchk_guard": false, 00:32:46.945 "ctrlr_loss_timeout_sec": 0, 00:32:46.945 "reconnect_delay_sec": 0, 00:32:46.945 "fast_io_fail_timeout_sec": 0, 00:32:46.945 "psk": "key0", 00:32:46.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.945 "hdgst": false, 00:32:46.945 "ddgst": false, 00:32:46.945 "multipath": "multipath" 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "bdev_nvme_set_hotplug", 00:32:46.945 "params": { 00:32:46.945 "period_us": 100000, 00:32:46.945 "enable": false 00:32:46.945 } 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "method": "bdev_wait_for_examine" 
00:32:46.945 } 00:32:46.945 ] 00:32:46.945 }, 00:32:46.945 { 00:32:46.945 "subsystem": "nbd", 00:32:46.945 "config": [] 00:32:46.945 } 00:32:46.945 ] 00:32:46.945 }' 00:32:46.945 11:51:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.945 11:51:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:46.945 [2024-11-15 11:51:27.225917] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 00:32:46.945 [2024-11-15 11:51:27.225988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120348 ] 00:32:46.945 [2024-11-15 11:51:27.295219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.945 [2024-11-15 11:51:27.353274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.204 [2024-11-15 11:51:27.542839] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:47.461 11:51:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.461 11:51:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:47.461 11:51:27 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:47.461 11:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.461 11:51:27 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:47.719 11:51:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:47.719 11:51:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:47.719 11:51:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:47.719 11:51:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.719 11:51:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.719 11:51:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.719 11:51:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:47.977 11:51:28 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:47.977 11:51:28 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:47.977 11:51:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:47.977 11:51:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.977 11:51:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.977 11:51:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:47.977 11:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:48.235 11:51:28 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:48.235 11:51:28 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:48.235 11:51:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:48.235 11:51:28 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:48.493 11:51:28 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:48.493 11:51:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:48.493 11:51:28 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.xnPCcxh1d1 /tmp/tmp.bKncUGYW8G 00:32:48.493 11:51:28 keyring_file -- keyring/file.sh@20 -- # killprocess 3120348 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3120348 ']' 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3120348 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120348 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120348' 00:32:48.493 killing process with pid 3120348 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@973 -- # kill 3120348 00:32:48.493 Received shutdown signal, test time was about 1.000000 seconds 00:32:48.493 00:32:48.493 Latency(us) 00:32:48.493 [2024-11-15T10:51:28.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.493 [2024-11-15T10:51:28.920Z] =================================================================================================================== 00:32:48.493 [2024-11-15T10:51:28.920Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:48.493 11:51:28 keyring_file -- common/autotest_common.sh@978 -- # wait 3120348 00:32:48.751 11:51:28 keyring_file -- keyring/file.sh@21 -- # killprocess 3118852 00:32:48.751 11:51:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3118852 ']' 00:32:48.751 11:51:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3118852 00:32:48.751 11:51:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:48.751 11:51:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.751 11:51:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3118852 00:32:48.751 11:51:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:48.751 11:51:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:48.751 11:51:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3118852' 00:32:48.751 killing process with pid 3118852 00:32:48.751 11:51:29 keyring_file -- common/autotest_common.sh@973 -- # kill 3118852 00:32:48.751 11:51:29 keyring_file -- common/autotest_common.sh@978 -- # wait 3118852 00:32:49.009 00:32:49.009 real 0m14.496s 00:32:49.009 user 0m36.952s 00:32:49.009 sys 0m3.200s 00:32:49.009 11:51:29 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.009 11:51:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:49.009 ************************************ 00:32:49.009 END TEST keyring_file 00:32:49.009 ************************************ 00:32:49.269 11:51:29 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:32:49.269 11:51:29 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:49.269 11:51:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:49.269 11:51:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 
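[editorial note] For reference while reading the key assertions above: bperf_cmd is a thin wrapper that forwards an RPC to the bdevperf instance over its UNIX socket, and get_refcnt just filters the keyring_get_keys output with jq. Reconstructed from the traced calls (the real helpers live in test/keyring/common.sh and may carry extra argument handling; $rootdir below stands for the SPDK checkout path spelled out in full in the trace):

# reconstructed from the xtrace above; see test/keyring/common.sh for the authoritative helpers
bperf_cmd()  { "$rootdir"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
(( $(get_refcnt key0) == 2 ))    # the check behind keyring/file.sh@122 above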
00:32:49.269 11:51:29 -- common/autotest_common.sh@10 -- # set +x 00:32:49.269 ************************************ 00:32:49.269 START TEST keyring_linux 00:32:49.269 ************************************ 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:49.269 Joined session keyring: 219047304 00:32:49.269 * Looking for test storage... 00:32:49.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.269 11:51:29 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.269 --rc genhtml_branch_coverage=1 00:32:49.269 --rc genhtml_function_coverage=1 00:32:49.269 --rc genhtml_legend=1 00:32:49.269 --rc geninfo_all_blocks=1 00:32:49.269 --rc geninfo_unexecuted_blocks=1 00:32:49.269 00:32:49.269 ' 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.269 --rc genhtml_branch_coverage=1 00:32:49.269 --rc genhtml_function_coverage=1 00:32:49.269 --rc genhtml_legend=1 00:32:49.269 --rc geninfo_all_blocks=1 00:32:49.269 --rc geninfo_unexecuted_blocks=1 00:32:49.269 00:32:49.269 ' 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.269 --rc genhtml_branch_coverage=1 00:32:49.269 --rc genhtml_function_coverage=1 00:32:49.269 --rc genhtml_legend=1 00:32:49.269 --rc geninfo_all_blocks=1 00:32:49.269 --rc geninfo_unexecuted_blocks=1 00:32:49.269 00:32:49.269 ' 00:32:49.269 11:51:29 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.269 --rc genhtml_branch_coverage=1 00:32:49.269 --rc genhtml_function_coverage=1 00:32:49.269 --rc genhtml_legend=1 00:32:49.269 --rc geninfo_all_blocks=1 00:32:49.269 --rc geninfo_unexecuted_blocks=1 00:32:49.269 00:32:49.269 ' 00:32:49.269 11:51:29 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:49.269 11:51:29 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.269 11:51:29 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.270 11:51:29 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.270 11:51:29 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.270 11:51:29 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.270 11:51:29 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.270 11:51:29 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.270 11:51:29 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.270 11:51:29 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.270 11:51:29 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:49.270 11:51:29 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:49.270 /tmp/:spdk-test:key0 00:32:49.270 11:51:29 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:49.270 11:51:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:49.270 
11:51:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:49.270 11:51:29 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:49.528 11:51:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:49.528 11:51:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:49.528 /tmp/:spdk-test:key1 00:32:49.528 11:51:29 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3120809 00:32:49.528 11:51:29 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:49.528 11:51:29 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3120809 00:32:49.528 11:51:29 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3120809 ']' 00:32:49.528 11:51:29 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.528 11:51:29 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.528 11:51:29 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.528 11:51:29 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.528 11:51:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.528 [2024-11-15 11:51:29.761544] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
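[editorial note] The prep_key calls above turn each raw hex key into the NVMe/TCP TLS PSK interchange format and write it to a 0600 file under /tmp. The encoding itself is done by an inline "python -" in nvmf/common.sh; the sketch below only reproduces what the input/output pair in this trace suggests. Treat the trailing four bytes being a little-endian zlib CRC-32 of the configured key as an assumption, not something the log states; compare the output against the NVMeTLSkey-1:00:... value printed above and check nvmf/common.sh for the authoritative version.

# sketch of format_interchange_psk as inferred from this trace (digest 0, i.e. no hash)
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                       # the "configured" PSK here is the hex string itself
blob = key + zlib.crc32(key).to_bytes(4, "little")   # assumed CRC variant and byte order
print(f"NVMeTLSkey-1:00:{base64.b64encode(blob).decode()}:")
EOF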
00:32:49.528 [2024-11-15 11:51:29.761651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120809 ] 00:32:49.528 [2024-11-15 11:51:29.832233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.528 [2024-11-15 11:51:29.892177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.786 11:51:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.786 11:51:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:49.786 11:51:30 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:49.786 11:51:30 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.786 11:51:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.786 [2024-11-15 11:51:30.179267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.786 null0 00:32:50.044 [2024-11-15 11:51:30.211388] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:50.044 [2024-11-15 11:51:30.211921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.044 11:51:30 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:50.044 497014505 00:32:50.044 11:51:30 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:50.044 689556225 00:32:50.044 11:51:30 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3120824 00:32:50.044 11:51:30 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:50.044 11:51:30 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3120824 /var/tmp/bperf.sock 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3120824 ']' 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:50.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.044 11:51:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:50.044 [2024-11-15 11:51:30.278982] Starting SPDK v25.01-pre git sha1 8531656d3 / DPDK 24.03.0 initialization... 
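[editorial note] What distinguishes this test from keyring_file above: the PSKs are not handed to bdevperf as file paths but parked in the kernel session keyring (the two keyctl add calls above return the serial numbers 497014505 and 689556225), and the RPCs issued next refer to them purely by name once the keyring_linux module is enabled. Condensed from commands already visible in this trace; rpc.py abbreviates the full scripts/rpc.py path, and bdevperf was started with --wait-for-rpc, which is why framework_start_init is sent explicitly:

# condensed from linux.sh@66..@75 as traced; key1 is added the same way as key0
keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s   # @s = session keyring
rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
rpc.py -s /var/tmp/bperf.sock framework_start_init
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0             # a key *name*, not a path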
00:32:50.044 [2024-11-15 11:51:30.279046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3120824 ] 00:32:50.044 [2024-11-15 11:51:30.345205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.045 [2024-11-15 11:51:30.403514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.302 11:51:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.302 11:51:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:50.302 11:51:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:50.302 11:51:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:50.560 11:51:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:50.560 11:51:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:50.817 11:51:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:50.817 11:51:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:51.075 [2024-11-15 11:51:31.390252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.075 nvme0n1 00:32:51.075 11:51:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:51.075 11:51:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:51.075 11:51:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:51.075 11:51:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:51.075 11:51:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:51.075 11:51:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.332 11:51:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:51.332 11:51:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:51.332 11:51:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:51.332 11:51:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:51.332 11:51:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.332 11:51:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.332 11:51:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:51.590 11:51:32 keyring_linux -- keyring/linux.sh@25 -- # sn=497014505 00:32:51.590 11:51:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:51.590 11:51:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:51.848 11:51:32 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 497014505 == \4\9\7\0\1\4\5\0\5 ]] 00:32:51.848 11:51:32 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 497014505 00:32:51.848 11:51:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:51.848 11:51:32 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.848 Running I/O for 1 seconds... 00:32:52.784 10402.00 IOPS, 40.63 MiB/s 00:32:52.784 Latency(us) 00:32:52.784 [2024-11-15T10:51:33.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.784 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:52.784 nvme0n1 : 1.01 10399.31 40.62 0.00 0.00 12227.02 3956.43 15146.10 00:32:52.784 [2024-11-15T10:51:33.211Z] =================================================================================================================== 00:32:52.784 [2024-11-15T10:51:33.211Z] Total : 10399.31 40.62 0.00 0.00 12227.02 3956.43 15146.10 00:32:52.784 { 00:32:52.784 "results": [ 00:32:52.784 { 00:32:52.784 "job": "nvme0n1", 00:32:52.784 "core_mask": "0x2", 00:32:52.784 "workload": "randread", 00:32:52.784 "status": "finished", 00:32:52.784 "queue_depth": 128, 00:32:52.784 "io_size": 4096, 00:32:52.784 "runtime": 1.012567, 00:32:52.784 "iops": 10399.311848006108, 00:32:52.784 "mibps": 40.62231190627386, 00:32:52.784 "io_failed": 0, 00:32:52.784 "io_timeout": 0, 00:32:52.784 "avg_latency_us": 12227.0153992473, 00:32:52.784 "min_latency_us": 3956.4325925925928, 00:32:52.784 "max_latency_us": 15146.097777777777 00:32:52.784 } 00:32:52.784 ], 00:32:52.784 "core_count": 1 00:32:52.784 } 00:32:52.784 11:51:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:52.784 11:51:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:53.042 11:51:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:53.042 11:51:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:53.042 11:51:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:53.042 11:51:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:53.042 11:51:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:53.042 11:51:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.300 11:51:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:53.300 11:51:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:53.300 11:51:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:53.300 11:51:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:53.300 11:51:33 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.300 11:51:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.559 [2024-11-15 11:51:33.975643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.559 [2024-11-15 11:51:33.976608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aedbc0 (107): Transport endpoint is not connected 00:32:53.559 [2024-11-15 11:51:33.977600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aedbc0 (9): Bad file descriptor 00:32:53.559 [2024-11-15 11:51:33.978599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:53.559 [2024-11-15 11:51:33.978620] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:53.559 [2024-11-15 11:51:33.978633] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:53.559 [2024-11-15 11:51:33.978654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
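[editorial note] The ERROR lines above, and the JSON-RPC request/response pair that follows, are the expected outcome of the negative check at linux.sh@84: the controller was already detached, and re-attaching with --psk :spdk-test:key1 is supposed to fail here. The NOT wrapper from autotest_common.sh turns that failure into a pass; a simplified view of the pattern (the real helper, as the trace shows, also inspects exit codes above 128 to tell signal deaths apart):

# simplified sketch of the expected-failure pattern used at linux.sh@84
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))      # success here means the wrapped command failed, as expected
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1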
00:32:53.559 request: 00:32:53.559 { 00:32:53.559 "name": "nvme0", 00:32:53.559 "trtype": "tcp", 00:32:53.559 "traddr": "127.0.0.1", 00:32:53.559 "adrfam": "ipv4", 00:32:53.559 "trsvcid": "4420", 00:32:53.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.559 "prchk_reftag": false, 00:32:53.559 "prchk_guard": false, 00:32:53.559 "hdgst": false, 00:32:53.559 "ddgst": false, 00:32:53.559 "psk": ":spdk-test:key1", 00:32:53.559 "allow_unrecognized_csi": false, 00:32:53.559 "method": "bdev_nvme_attach_controller", 00:32:53.559 "req_id": 1 00:32:53.559 } 00:32:53.559 Got JSON-RPC error response 00:32:53.559 response: 00:32:53.559 { 00:32:53.559 "code": -5, 00:32:53.559 "message": "Input/output error" 00:32:53.559 } 00:32:53.817 11:51:33 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:32:53.817 11:51:33 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:53.817 11:51:33 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:53.817 11:51:33 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@33 -- # sn=497014505 00:32:53.817 11:51:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 497014505 00:32:53.817 1 links removed 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@33 -- # sn=689556225 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 689556225 00:32:53.817 1 links removed 00:32:53.817 11:51:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3120824 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3120824 ']' 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3120824 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120824 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120824' 00:32:53.817 killing process with pid 3120824 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 3120824 00:32:53.817 Received shutdown signal, test time was about 1.000000 seconds 00:32:53.817 00:32:53.817 
Latency(us) 00:32:53.817 [2024-11-15T10:51:34.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.817 [2024-11-15T10:51:34.244Z] =================================================================================================================== 00:32:53.817 [2024-11-15T10:51:34.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.817 11:51:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 3120824 00:32:54.077 11:51:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3120809 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3120809 ']' 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3120809 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120809 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120809' 00:32:54.077 killing process with pid 3120809 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 3120809 00:32:54.077 11:51:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 3120809 00:32:54.383 00:32:54.383 real 0m5.257s 00:32:54.383 user 0m10.459s 00:32:54.383 sys 0m1.583s 00:32:54.383 11:51:34 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.383 11:51:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:54.383 ************************************ 00:32:54.383 END TEST keyring_linux 00:32:54.383 ************************************ 00:32:54.383 11:51:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:54.383 11:51:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:54.383 11:51:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:54.383 11:51:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:54.383 11:51:34 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:54.383 11:51:34 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:54.383 11:51:34 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:54.383 11:51:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.383 11:51:34 -- common/autotest_common.sh@10 -- # set +x 00:32:54.383 11:51:34 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:54.383 11:51:34 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:54.383 11:51:34 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:54.383 11:51:34 -- common/autotest_common.sh@10 -- # set +x 00:32:56.345 INFO: APP EXITING 
00:32:56.345 INFO: killing all VMs 00:32:56.345 INFO: killing vhost app 00:32:56.345 INFO: EXIT DONE 00:32:57.281 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:57.281 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:57.281 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:57.539 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:57.539 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:57.539 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:57.539 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:57.539 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:57.539 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:32:57.539 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:57.539 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:57.539 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:57.539 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:57.539 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:57.539 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:57.539 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:57.539 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:58.914 Cleaning 00:32:58.914 Removing: /var/run/dpdk/spdk0/config 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:58.914 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:58.914 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:58.914 Removing: /var/run/dpdk/spdk1/config 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:58.914 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:58.914 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:58.914 Removing: /var/run/dpdk/spdk2/config 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:58.914 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:58.914 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:58.914 Removing: /var/run/dpdk/spdk3/config 00:32:58.914 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:59.173 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:59.173 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:59.173 Removing: /var/run/dpdk/spdk4/config
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:59.173 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:59.173 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:59.173 Removing: /dev/shm/bdev_svc_trace.1
00:32:59.173 Removing: /dev/shm/nvmf_trace.0
00:32:59.173 Removing: /dev/shm/spdk_tgt_trace.pid2799004
00:32:59.173 Removing: /var/run/dpdk/spdk0
00:32:59.173 Removing: /var/run/dpdk/spdk1
00:32:59.173 Removing: /var/run/dpdk/spdk2
00:32:59.173 Removing: /var/run/dpdk/spdk3
00:32:59.173 Removing: /var/run/dpdk/spdk4
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2797320
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2798061
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2799004
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2799340
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2800019
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2800162
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2800886
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2801008
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2801268
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2802474
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2803522
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2803839
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2804033
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2804253
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2804453
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2804611
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2804769
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2805078
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2805272
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2807755
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2807925
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2808087
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2808210
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2808521
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2808652
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2808957
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2809086
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2809255
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2809282
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2809547
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2809566
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2810054
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2810213
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2810423
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2812532
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2815287
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2822923
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2823453
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2825849
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2826131
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2828777
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2832508
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2834712
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2841136
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2846374
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2847690
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2848411
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2859254
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2861546
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2889044
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2892801
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2896697
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2900975
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2900977
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2901645
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2902248
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2902840
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2903236
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2903335
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2903498
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2903637
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2903645
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2904303
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2904837
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2905492
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2905898
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2905900
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2906162
00:32:59.173 Removing: /var/run/dpdk/spdk_pid2907059
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2907783
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2913122
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2941206
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2944636
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2945813
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2947134
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2947277
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2947421
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2947562
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2948000
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2949323
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2950140
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2950493
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2952108
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2952532
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2953091
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2955361
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2958765
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2958766
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2958767
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2960981
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2965718
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2968483
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2972244
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2973197
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2974402
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2975884
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2978654
00:32:59.174 Removing: /var/run/dpdk/spdk_pid2981232
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2983602
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2987835
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2987844
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2990745
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2990887
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2991020
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2991302
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2991418
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2994185
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2994520
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2997193
00:32:59.432 Removing: /var/run/dpdk/spdk_pid2999179
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3002608
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3006200
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3013118
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3017670
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3017673
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3030051
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3030580
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3030987
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3031516
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3032076
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3032504
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3032915
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3033334
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3035837
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3036096
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3039901
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3039954
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3043320
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3045934
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3053598
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3054000
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3056510
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3056684
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3059303
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3062998
00:32:59.432 Removing: /var/run/dpdk/spdk_pid3065146
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3071419
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3076626
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3077920
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3078583
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3089262
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3091623
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3093651
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3098663
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3098724
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3101629
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3103032
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3104436
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3105182
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3106591
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3107461
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3112807
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3113141
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3113532
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3115098
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3115591
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3115892
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3118852
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3118867
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3120348
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3120809
00:32:59.433 Removing: /var/run/dpdk/spdk_pid3120824
00:32:59.433 Clean
11:51:39 -- common/autotest_common.sh@1453 -- # return 0
00:32:59.433 11:51:39 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:59.433 11:51:39 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:59.433 11:51:39 -- common/autotest_common.sh@10 -- # set +x
00:32:59.433 11:51:39 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:59.433 11:51:39 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:59.433 11:51:39 -- common/autotest_common.sh@10 -- # set +x
00:32:59.433 11:51:39 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:59.433 11:51:39 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:59.433 11:51:39 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:59.433 11:51:39 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:59.433 11:51:39 -- spdk/autotest.sh@398 -- # hostname
00:32:59.433 11:51:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:59.692 geninfo: WARNING: invalid characters removed from testname!
00:33:31.751 11:52:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:35.032 11:52:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:37.561 11:52:17 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:40.843 11:52:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:44.125 11:52:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:46.657 11:52:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:49.943 11:52:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:49.943 11:52:29 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:49.943 11:52:29 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:33:49.943 11:52:29 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:49.943 11:52:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:49.943 11:52:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:49.943 + [[ -n 2726685 ]]
00:33:49.943 + sudo kill 2726685
00:33:49.954 [Pipeline] }
00:33:49.972 [Pipeline] // stage
00:33:49.978 [Pipeline] }
00:33:49.993 [Pipeline] // timeout
00:33:49.999 [Pipeline] }
00:33:50.014 [Pipeline] // catchError
00:33:50.020 [Pipeline] }
00:33:50.035 [Pipeline] // wrap
00:33:50.040 [Pipeline] }
00:33:50.055 [Pipeline] // catchError
00:33:50.064 [Pipeline] stage
00:33:50.066 [Pipeline] { (Epilogue)
00:33:50.079 [Pipeline] catchError
00:33:50.081 [Pipeline] {
00:33:50.094 [Pipeline] echo
00:33:50.096 Cleanup processes
00:33:50.102 [Pipeline] sh
00:33:50.387 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:50.387 3131385 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:50.399 [Pipeline] sh
00:33:50.678 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:50.678 ++ grep -v 'sudo pgrep'
00:33:50.678 ++ awk '{print $1}'
00:33:50.678 + sudo kill -9
00:33:50.678 + true
00:33:50.691 [Pipeline] sh
00:33:51.017 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:01.028 [Pipeline] sh
00:34:01.315 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:01.315 Artifacts sizes are good
00:34:01.330 [Pipeline] archiveArtifacts
00:34:01.338 Archiving artifacts
00:34:01.489 [Pipeline] sh
00:34:01.774 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:01.791 [Pipeline] cleanWs
00:34:01.801 [WS-CLEANUP] Deleting project workspace...
00:34:01.802 [WS-CLEANUP] Deferred wipeout is used...
00:34:01.810 [WS-CLEANUP] done
00:34:01.812 [Pipeline] }
00:34:01.829 [Pipeline] // catchError
00:34:01.841 [Pipeline] sh
00:34:02.149 + logger -p user.info -t JENKINS-CI
00:34:02.157 [Pipeline] }
00:34:02.169 [Pipeline] // stage
00:34:02.174 [Pipeline] }
00:34:02.188 [Pipeline] // node
00:34:02.193 [Pipeline] End of Pipeline
00:34:02.230 Finished: SUCCESS